Add the keyword static to the class declaration:
// this is a non-generic static class
public static class LinqHelper
{
}
Set python to python2.7 before running npm install.
Linux:
export PYTHON=python2.7
Windows:
set PYTHON=python2.7
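Alternatively, assuming you want the setting to persist across shells, you can store it in your npm config (this is the python key that node-gyp reads):
npm config set python python2.7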
The best way is to not use any pairing code at all. Instead of doing it in onClick, go to another function or class where you create the socket using a UUID. Android automatically pops up a pairing dialog if the device is not already paired. Or see this link for a better understanding. Below is code for the same:
private OnItemClickListener mDeviceClickListener = new OnItemClickListener() {
    public void onItemClick(AdapterView<?> av, View v, int arg2, long arg3) {
        // Cancel discovery because it's costly and we're about to connect
        mBtAdapter.cancelDiscovery();
        // Get the device MAC address, which is the last 17 chars in the View
        String info = ((TextView) v).getText().toString();
        String address = info.substring(info.length() - 17);
        // Create the result Intent and include the MAC address
        Intent intent = new Intent();
        intent.putExtra(EXTRA_DEVICE_ADDRESS, address);
        // Set result and finish this Activity
        setResult(Activity.RESULT_OK, intent);
        // Add these two lines to launch the connection Activity
        Intent myIntent = new Intent(v.getContext(), Connect.class);
        startActivityForResult(myIntent, 0);
        finish();
    }
};
The Connect.java file is:
public class Connect extends Activity {
    private static final String TAG = "zeoconnect";
    private ByteBuffer localByteBuffer;
    private InputStream in;
    byte[] arrayOfByte = new byte[4096];
    int bytes;
    public BluetoothDevice mDevice;

    @Override
    public void onCreate(Bundle savedInstanceState) {
        super.onCreate(savedInstanceState);
        setContentView(R.layout.connect);
        try {
            setup();
        } catch (ZeoMessageException e) {
            e.printStackTrace();
        } catch (ZeoMessageParseException e) {
            e.printStackTrace();
        }
    }

    private void setup() throws ZeoMessageException, ZeoMessageParseException {
        getApplicationContext().registerReceiver(receiver,
                new IntentFilter(BluetoothDevice.ACTION_ACL_CONNECTED));
        getApplicationContext().registerReceiver(receiver,
                new IntentFilter(BluetoothDevice.ACTION_ACL_DISCONNECTED));
        BluetoothDevice zee = BluetoothAdapter.getDefaultAdapter()
                .getRemoteDevice("**:**:**:**:**:**"); // add device MAC address
        try {
            sock = zee.createRfcommSocketToServiceRecord(
                    UUID.fromString("*******************")); // use unique UUID
        } catch (IOException e1) {
            e1.printStackTrace();
        }
        Log.d(TAG, "++++ Connecting");
        try {
            sock.connect();
        } catch (IOException e1) {
            e1.printStackTrace();
        }
        Log.d(TAG, "++++ Connected");
        try {
            in = sock.getInputStream();
        } catch (IOException e1) {
            e1.printStackTrace();
        }
        Log.d(TAG, "++++ Listening...");
        // NOTE: this blocking read loop runs on the calling thread;
        // in production code, move it to a worker thread.
        while (true) {
            try {
                bytes = in.read(arrayOfByte);
                Log.d(TAG, "++++ Read " + bytes + " bytes");
            } catch (IOException e1) {
                e1.printStackTrace();
            }
            Log.d(TAG, "++++ Done: test()");
        }
    }

    private static final LogBroadcastReceiver receiver = new LogBroadcastReceiver();

    public static class LogBroadcastReceiver extends BroadcastReceiver {
        @Override
        public void onReceive(Context paramAnonymousContext, Intent paramAnonymousIntent) {
            Log.d("ZeoReceiver", paramAnonymousIntent.toString());
            Bundle extras = paramAnonymousIntent.getExtras();
            for (String k : extras.keySet()) {
                Log.d("ZeoReceiver", " Extra: " + extras.get(k).toString());
            }
        }
    }

    private BluetoothSocket sock;

    @Override
    public void onDestroy() {
        getApplicationContext().unregisterReceiver(receiver);
        if (sock != null) {
            try {
                sock.close();
            } catch (IOException e) {
                e.printStackTrace();
            }
        }
        super.onDestroy();
    }
}
Browsers aren't allowed to access the local file system unless you're viewing a local HTML page. You have to upload the image somewhere. If it's in the same directory as the HTML file, then you can use <img src="localfile.jpg"/>
I have created a library to batch delete or update records in a single round trip on EF Core 5.
Sample code as follows:
await ctx.DeleteRangeAsync(b => b.Price > n || b.AuthorName == "zack yang");
await ctx.BatchUpdate()
.Set(b => b.Price, b => b.Price + 3)
.Set(b => b.AuthorName, b => b.Title.Substring(3, 2) + b.AuthorName.ToUpper())
.Set(b => b.PubTime, b => DateTime.Now)
.Where(b => b.Id > n || b.AuthorName.StartsWith("Zack"))
.ExecuteAsync();
Github repository: https://github.com/yangzhongke/Zack.EFCore.Batch Report: https://www.reddit.com/r/dotnetcore/comments/k1esra/how_to_batch_delete_or_update_in_entity_framework/
Try this...
<div class="row">
<div class="col-xs-3">
Menu
</div>
<div class="col-xs-9">
<div class="row">
<div class="col-sm-4 col-sm-push-8">
Right content
</div>
<div class="col-sm-8 col-sm-pull-4">
Content
</div>
</div>
</div>
</div>
Padding to 36 bytes is not weird: the structure is being aligned to an address that is a multiple of 4.
So basically you have 34 bytes of data in your structure, and the next structure must be placed at an address that is a multiple of 4. The closest such value after 34 is 36, and the padding area counts toward the size of the structure.
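A minimal sketch of the effect (the member layout here is hypothetical; any struct whose strictest member alignment is 4 behaves the same way):
#include <stdio.h>

struct Example {
    int values[8]; /* 32 bytes, 4-byte alignment */
    short tail;    /* +2 bytes = 34 bytes of data */
};                 /* padded to 36 so the next array element starts at a multiple of 4 */

int main(void) {
    printf("%zu\n", sizeof(struct Example)); /* prints 36 on typical ABIs */
    return 0;
}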
Try adding this command in your build.gradle, then configure it to match your setup.
This is what worked for me on LinuxMint 19.
curl -s https://yum.dockerproject.org/gpg | sudo apt-key add
apt-key fingerprint 58118E89F3A912897C070ADBF76221572C52609D
sudo add-apt-repository "deb https://apt.dockerproject.org/repo ubuntu-$(lsb_release -cs) main"
sudo apt-get update
sudo apt-get install docker-ce docker-ce-cli containerd.io
Personally, I encountered this issue while migrating an IIS6 website to IIS7. To fix it, I used this command line:
%windir%\System32\inetsrv\appcmd migrate config "MyWebSite\"
Make sure to back up your web.config first.
Add an icon file named "favicon.ico" to the root of your website.
When using iconv, the locale must be set:
function test_enc($text = 'ešcržýáíé EŠCRŽÝÁÍÉ fóø bår FÓØ BÅR æ')
{
    echo '<tt>';
    echo iconv('utf8', 'ascii//TRANSLIT', $text);
    echo '</tt><br/>';
}
test_enc();
setlocale(LC_ALL, 'cs_CZ.utf8');
test_enc();
setlocale(LC_ALL, 'en_US.utf8');
test_enc();
This yields:
????????? ????????? f?? b?r F?? B?R ae
escrzyaie ESCRZYAIE fo? bar FO? BAR ae
escrzyaie ESCRZYAIE fo? bar FO? BAR ae
I don't have locales other than cs_CZ and en_US installed, so I can't test the others.
In C# I have seen a solution that uses translation to a Unicode normalized form: accents are split off and then filtered out via the nonspacing-mark Unicode category.
The code that worked for me is:
ThisWorkbook.Sheets("sheetName").Activate
var rndval=items[Math.floor(Math.random()*items.length)];
The question is already answered; however, when I first saw it I thought of the Node.js Buffer. But it is way slower than +, so it is likely that nothing is faster than + for string concatenation.
Tested with the following code:
function a() {
    var s = "hello";
    var p = "world";
    s = s + p;
    return s;
}

function b() {
    // Buffer.from replaces the deprecated new Buffer(string)
    var s = Buffer.from("hello");
    var p = Buffer.from("world");
    s = Buffer.concat([s, p]);
    return s;
}
var times = 100000;

var t1 = new Date();
for (var i = 0; i < times; i++) {
    a();
}
var t2 = new Date();
console.log("Normal took: " + (t2 - t1) + " ms.");

for (var i = 0; i < times; i++) {
    b();
}
var t3 = new Date();
console.log("Buffer took: " + (t3 - t2) + " ms.");
Output:
Normal took: 4 ms.
Buffer took: 458 ms.
You can simply add show.legend = FALSE to a geom to suppress the corresponding legend.
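For instance, a minimal ggplot2 sketch (the dataset and aesthetics are just for illustration):
library(ggplot2)
# The colour legend produced by this layer is suppressed.
ggplot(iris, aes(Sepal.Length, Sepal.Width, colour = Species)) +
  geom_point(show.legend = FALSE)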
A simple Google search turned up this:
If the data is actually an HTML page and has NOT been created by ASP, PHP, or some other scripting language, and you are using Internet Explorer 6, and you have Excel installed on your computer, simply right-click on the page and look through the menu. You should see "Export to Microsoft Excel." If all these conditions are true, click on the menu item and after a few prompts it will be imported to Excel.
If you can't do that, the same source gives an alternate "drag-and-drop" method.
The google-services.json file is unnecessary for receiving notifications. Just add a variable for each flavour in your build.gradle file:
buildConfigField "String", "GCM_SENDER_ID", "\"111111111111\""
Use this variable BuildConfig.GCM_SENDER_ID instead of getString(R.string.gcm_defaultSenderId) while registering:
instanceID.getToken(BuildConfig.GCM_SENDER_ID, GoogleCloudMessaging.INSTANCE_ID_SCOPE, null);
For those who like one-liner solutions in ES6, something like this:
const arrayOne = [
  { value: "4a55eff3-1e0d-4a81-9105-3ddd7521d642", display: "Jamsheer" },
  { value: "644838b3-604d-4899-8b78-09e4799f586f", display: "Muhammed" },
  { value: "b6ee537a-375c-45bd-b9d4-4dd84a75041d", display: "Ravi" },
  { value: "e97339e1-939d-47ab-974c-1b68c9cfb536", display: "Ajmal" },
  { value: "a63a6f77-c637-454e-abf2-dfb9b543af6c", display: "Ryan" },
];

const arrayTwo = [
  { value: "4a55eff3-1e0d-4a81-9105-3ddd7521d642", display: "Jamsheer" },
  { value: "644838b3-604d-4899-8b78-09e4799f586f", display: "Muhammed" },
  { value: "b6ee537a-375c-45bd-b9d4-4dd84a75041d", display: "Ravi" },
  { value: "e97339e1-939d-47ab-974c-1b68c9cfb536", display: "Ajmal" },
];

const results = arrayOne.filter(({ value: id1 }) => !arrayTwo.some(({ value: id2 }) => id2 === id1));

console.log(results);
Using Microsoft SQL Server Management Studio you can create a Linked Server. First make a connection to the current (local) server, then go to Server Objects > Linked Servers > context menu > New Linked Server. In the New Linked Server window you have to specify the desired name for the remote server, the real server name or IP address (Data Source), and credentials (Security page).
Then you can select data from the linked server:
select * from [linked_server_name].[database].[schema].[table]
In Java, the default capacity of an ArrayList's backing array at initialization is 10:
private static final int DEFAULT_CAPACITY = 10;
But the size() method returns the number of elements inserted into the list, and since at initialization time you have not inserted any element, it returns zero.
private int size;

public boolean add(E e) {
    ensureCapacityInternal(size + 1); // Increments modCount!!
    elementData[size++] = e;
    return true;
}

public void add(int index, E element) {
    rangeCheckForAdd(index);
    ensureCapacityInternal(size + 1); // Increments modCount!!
    System.arraycopy(elementData, index, elementData, index + 1, size - index);
    elementData[index] = element;
    size++;
}
In my case, setting extra options for the MASM listing file produces this error. Use only:
Enable Assembler Generated Code Listing: Yes (/Sg)
Assembled Code Listing File: $(ProjectName).lst
With just these it is fine, but any extra options cause the issue.
In order to check if an object is compatible with a given type variable, instead of writing
u is t
you should write
typeof(t).IsInstanceOfType(u)
For Android Studio, add the following to your top-level build.gradle file within the allprojects block:
tasks.withType(JavaCompile) {
    options.compilerArgs << "-Xlint:unchecked" << "-Xlint:deprecation"
}
Callbacks can be very helpful in Java.
Using a callback you can notify another class that an asynchronous action has completed, with success or error.
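A minimal sketch of the pattern (all names are illustrative, not from any particular library):
// The callback interface the caller implements.
interface Callback<T> {
    void onSuccess(T result);
    void onError(Exception e);
}

class Worker {
    // Runs the work on another thread and reports back through the callback.
    void fetchGreeting(Callback<String> callback) {
        new Thread(() -> {
            try {
                callback.onSuccess("hello");
            } catch (Exception e) {
                callback.onError(e);
            }
        }).start();
    }
}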
SELECT last_name, ROUND((SYSDATE - hire_date) / 7, 0) AS tuner
FROM employees
WHERE department_id = 90
ORDER BY last_name;
If you want to delete it completely, use del:
del your_variable
Or otherwise, to make the value None:
your_variable = None
If it's a mutable iterable (lists, sets, dictionaries, etc., but not tuples, because they're immutable), you can make it empty like:
your_variable.clear()
Then your_variable will be empty.
Update as of August 2020:
This is what worked for me recently -
Upload your file and get a shareable link which anyone can see (change the permission from "Restricted" to "Anyone with the Link" in the share-link options).
Then run:
SHAREABLE_LINK=<google drive shareable link>
curl -L https://drive.google.com/uc\?id\=$(echo $SHAREABLE_LINK | cut -f6 -d"/")
When you set a new value on an element, you need to trigger the change event:
$('element').val(newValue).trigger('change');
As for a tool, I started using Firecamp. It's like Postman, but it also supports WebSockets and Socket.IO.
You can use the compact() function to test for the existence of a variable that may be null. Variables that do not exist will not turn up in the result, while null values will show.
$x = null;
$y = 'y';
$r = compact('x', 'y', 'z');
print_r($r);
// Output:
// Array (
// [x] =>
// [y] => y
// )
In the case of your example:
if (compact('v')) {
// True if $v exists, even when null.
// False on var $v; without assignment and when $v does not exist.
}
Of course for variables in global scope you can also use array_key_exists().
By the way, I would personally avoid like the plague any situation where there is a semantic difference between a variable not existing and the variable having a null value. PHP and most other languages just do not think there is one.
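For completeness, a sketch of the array_key_exists() approach mentioned above for globals (variable names are illustrative):
$v = null;
var_dump(array_key_exists('v', $GLOBALS));           // bool(true): exists even though null
var_dump(array_key_exists('no_such_var', $GLOBALS)); // bool(false): never defined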
You can't do the second, because even if you use the remove() method on the Iterator, you'll get an exception thrown.
Personally, I would prefer the first for all Collection instances: despite the additional overhead of creating a new Collection, I find it less prone to error during edits by other developers. On some Collection implementations the Iterator's remove() is supported; on others it isn't. You can read more in the docs for Iterator.
The third alternative is to create a new Collection, iterate over the original, and add all the members of the first Collection that are not up for deletion to the second Collection, as sketched below. Depending on the size of the Collection and the number of deletes, this could significantly save on memory compared to the first approach.
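A minimal sketch of that third alternative (assuming java.util imports; the element type and removal predicate are illustrative):
List<String> original = new ArrayList<>(Arrays.asList("keep", "drop", "keep2"));

// Copy only the elements that should survive.
Collection<String> survivors = new ArrayList<>();
for (String s : original) {
    if (!s.startsWith("drop")) { // removal predicate
        survivors.add(s);
    }
}
// survivors now takes the place of the original collection.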
You have to use the Kendo UI DropDownList select method (documentation here). Basically you should:
// get a reference to the dropdown list
var dropdownlist = $("#Instrument").data("kendoDropDownList");
If you know the index you can use:
// selects by index
dropdownlist.select(1);
If not, use:
// selects item if its text is equal to "test" using predicate function
dropdownlist.select(function(dataItem) {
return dataItem.symbol === "test";
});
JSFiddle example here
Do not simplify this code, or you will get a LINQ translation error: the test is between a date with the time at 0:00:00 and the same date with the time at 23:59:59.
iFilter.MyDate1 = DateTime.Today; // or DateTime.MinValue
// GET
var tempQuery = ctx.MyTable.AsQueryable();
if (iFilter.MyDate1 != DateTime.MinValue)
{
TimeSpan temp24h = new TimeSpan(23,59,59);
DateTime tempEndMyDate1 = iFilter.MyDate1.Add(temp24h);
// DO not change the code below, you need 2 date variables...
tempQuery = tempQuery.Where(w => w.MyDate2 >= iFilter.MyDate1
&& w.MyDate2 <= tempEndMyDate1);
}
List<MyTable> returnObject = tempQuery.ToList();
You need to enable CORS in your Web API. The easiest and preferred way to enable CORS globally is to add the following into web.config:
<system.webServer>
<httpProtocol>
<customHeaders>
<add name="Access-Control-Allow-Origin" value="*" />
<add name="Access-Control-Allow-Headers" value="Content-Type" />
<add name="Access-Control-Allow-Methods" value="GET, POST, PUT, DELETE, OPTIONS" />
</customHeaders>
</httpProtocol>
</system.webServer>
Please note that the methods are all individually specified, instead of using *. This is because there is a bug that occurs when using *.
You can also enable CORS by code.
Update: the following NuGet package is required: Microsoft.AspNet.WebApi.Cors.
public static class WebApiConfig
{
public static void Register(HttpConfiguration config)
{
config.EnableCors();
// ...
}
}
Then you can use the [EnableCors] attribute on actions or controllers like this:
[EnableCors(origins: "http://www.example.com", headers: "*", methods: "*")]
Or you can register it globally
public static class WebApiConfig
{
public static void Register(HttpConfiguration config)
{
var cors = new EnableCorsAttribute("http://www.example.com", "*", "*");
config.EnableCors(cors);
// ...
}
}
You also need to handle preflight requests, which use the HTTP OPTIONS verb. Web API needs to respond to the OPTIONS request in order to confirm that it is indeed configured to support CORS. To handle this, all you need to do is send an empty response back. You can do this inside your actions, or you can do it globally like this:
// Global.asax.cs
protected void Application_BeginRequest()
{
if (Request.Headers.AllKeys.Contains("Origin") && Request.HttpMethod == "OPTIONS")
{
Response.Flush();
}
}
This extra check was added to ensure that old APIs that were designed to accept only GET and POST requests will not be exploited. Imagine sending a DELETE request to an API designed when this verb didn't exist: the outcome is unpredictable and the results might be dangerous.
I was facing the same issue; then I made my key 16 bytes and it's working properly now. Create your key exactly 16 bytes long and it will work.
Trim() will only remove leading or trailing spaces. Try .Contains() to check whether a string contains white space:
"sossjjs sskkk".Contains(" ") // returns true
SELECT ... INTO creates a new table:
select *
into database..newtable
from database..othertables....
If you have already used select * into tablename from other tablenames and the table exists, then to append rows you use INSERT INTO ... SELECT instead:
insert into database..existingtable
select * from database..othertables....
You have a truncated array representation. Let's look at a full example:
>>> a = np.zeros((2, 3, 4))
>>> a
array([[[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]],
[[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]]])
Arrays in NumPy are printed as the word array followed by the structure, similar to nested Python lists. Let's create a similar list:
>>> l = [[[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]],
[[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.],
[ 0., 0., 0., 0.]]]
>>> l
[[[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]],
[[0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0], [0.0, 0.0, 0.0, 0.0]]]
The first level of this compound list l has exactly 2 elements, just like the first dimension of the array a (# of rows). Each of these elements is itself a list with 3 elements, which is equal to the second dimension of a (# of columns). Finally, the most nested lists have 4 elements each, the same as the third dimension of a (depth / # of colors).
So you've got exactly the same structure (in terms of dimensions) as in Matlab, just printed in another way.
Some caveats:
1. Matlab stores data column by column ("Fortran order"), while NumPy by default stores it row by row ("C order"). This doesn't affect indexing, but may affect performance. For example, in Matlab an efficient loop will be over columns (e.g. for n = 1:10 a(:, n) end), while in NumPy it's preferable to iterate over rows (e.g. for n in range(10): a[n, :] -- note n in the first position, not the last).
2. If you work with colored images in OpenCV, remember that:
2.1. It stores images in BGR format and not RGB, like most Python libraries do.
2.2. Most functions work on image coordinates (x, y), which are opposite to matrix coordinates (i, j).
Unless you resize up, you cannot do this with raster graphics.
What you can do, with good filtering and smoothing, is resize without losing any noticeable quality.
You can also alter the DPI metadata of the image (assuming it has some) which will keep exactly the same pixel count, but will alter how image editors think of it in 'real-world' measurements.
And just to cover all bases, if you really meant just the file size of the image and not the actual image dimensions, I suggest you look at a lossless encoding of the image data. My suggestion for this would be to resave the image as a .png file (I tend to use Paint as a free transcoder for images in Windows: load the image in Paint, then save as the new format).
Safest method, which I use frequently:
git clean -fd
Syntax explanation as per the git-clean doc page:
-f (alias: --force). If the Git configuration variable clean.requireForce is not set to false, git clean will refuse to delete files or directories unless given -f, -n or -i. Git will refuse to delete directories with a .git sub directory or file unless a second -f is given.
-d. Remove untracked directories in addition to untracked files. If an untracked directory is managed by a different Git repository, it is not removed by default. Use the -f option twice if you really want to remove such a directory.
As mentioned in the comments, it might be preferable to do a git clean -nd, which does a dry run and tells you what would be deleted before actually deleting it.
Link to the git clean doc page:
https://git-scm.com/docs/git-clean
No, you cannot directly change an account status from EXPIRE(GRACE) to OPEN without resetting the password.
The documentation says:
If you cause a database user's password to expire with PASSWORD EXPIRE, then the user (or the DBA) must change the password before attempting to log into the database following the expiration.
However, you can indirectly change the status to OPEN by resetting the user's password hash to the existing value. Unfortunately, setting the password hash to itself has the following complications, and almost every other solution misses at least one of these issues:
A profile limit may be set to DEFAULT, which is a pointer to the DEFAULT profile's value, so we may need to recursively check the profile.
--Purpose: Change a user from EXPIRED to OPEN by setting a user's password to the same value.
--This PL/SQL block requires elevated privileges and should be run as SYS.
--This task is difficult because we need to temporarily change profiles to avoid
-- errors like "ORA-28007: the password cannot be reused".
--
--How to use: Run as SYS in SQL*Plus and enter the username when prompted.
-- If using another IDE, manually replace the variable two lines below.
declare
v_username varchar2(128) := trim(upper('&USERNAME'));
--Do not change anything below this line.
v_profile varchar2(128);
v_old_password_reuse_time varchar2(128);
v_uses_default_for_time varchar2(3);
v_old_password_reuse_max varchar2(128);
v_uses_default_for_max varchar2(3);
v_alter_user_sql varchar2(4000);
begin
--Get user's profile information.
--(This is tricky because there could be an indirection to the DEFAULT profile.)
select
profile,
case when user_password_reuse_time = 'DEFAULT' then default_password_reuse_time else user_password_reuse_time end password_reuse_time,
case when user_password_reuse_time = 'DEFAULT' then 'Yes' else 'No' end uses_default_for_time,
case when user_password_reuse_max = 'DEFAULT' then default_password_reuse_max else user_password_reuse_max end password_reuse_max,
case when user_password_reuse_max = 'DEFAULT' then 'Yes' else 'No' end uses_default_for_max
into v_profile, v_old_password_reuse_time, v_uses_default_for_time, v_old_password_reuse_max, v_uses_default_for_max
from
(
--User's profile information.
select
dba_profiles.profile,
max(case when resource_name = 'PASSWORD_REUSE_TIME' then limit else null end) user_password_reuse_time,
max(case when resource_name = 'PASSWORD_REUSE_MAX' then limit else null end) user_password_reuse_max
from dba_profiles
join dba_users
on dba_profiles.profile = dba_users.profile
where username = v_username
group by dba_profiles.profile
) users_profile
cross join
(
--Default profile information.
select
max(case when resource_name = 'PASSWORD_REUSE_TIME' then limit else null end) default_password_reuse_time,
max(case when resource_name = 'PASSWORD_REUSE_MAX' then limit else null end) default_password_reuse_max
from dba_profiles
where profile = 'DEFAULT'
) default_profile;
--Get user's password information.
select
'alter user '||name||' identified by values '''||
spare4 || case when password is not null then ';' else null end || password ||
''''
into v_alter_user_sql
from sys.user$
where name = v_username;
--Change profile limits, if necessary.
if v_old_password_reuse_time <> 'UNLIMITED' then
execute immediate 'alter profile '||v_profile||' limit password_reuse_time unlimited';
end if;
if v_old_password_reuse_max <> 'UNLIMITED' then
execute immediate 'alter profile '||v_profile||' limit password_reuse_max unlimited';
end if;
--Change the user's password.
execute immediate v_alter_user_sql;
--Change the profile limits back, if necessary.
if v_old_password_reuse_time <> 'UNLIMITED' then
if v_uses_default_for_time = 'Yes' then
execute immediate 'alter profile '||v_profile||' limit password_reuse_time default';
else
execute immediate 'alter profile '||v_profile||' limit password_reuse_time '||v_old_password_reuse_time;
end if;
end if;
if v_old_password_reuse_max <> 'UNLIMITED' then
if v_uses_default_for_max = 'Yes' then
execute immediate 'alter profile '||v_profile||' limit password_reuse_max default';
else
execute immediate 'alter profile '||v_profile||' limit password_reuse_max '||v_old_password_reuse_max;
end if;
end if;
end;
/
Instead of the VALUES part of the INSERT query, just use a SELECT query, as below:
INSERT INTO table1 ( column1 , 2, 3... )
SELECT col1, 2, 3... FROM table2
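For instance, with hypothetical tables and columns (names are illustrative only):
INSERT INTO archive_orders ( id, customer, total )
SELECT id, customer, total FROM orders WHERE order_date < '2020-01-01'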
The line
$mail->CharSet = "UTF-8";
must come after
$mail = new PHPMailer();
and be written with no spaces around the arrow. Try this:
$mail = new PHPMailer();
$mail->CharSet = "UTF-8";
Sounds like you may already have your answer but in case you need one more place to look you may want to check out the size and activity of your temp DB. We had an issue like this once at a client site where a few times a day their performance would horribly degrade and occasionally timeout. The problem turned out to be a separate application that was thrashing the temp DB so much it was affecting overall server performance.
Good luck with the continued troubleshooting!
Route::post('login', 'LoginController@login')->name('login');
works very well, and it is clean and self-explanatory.
The steps are very simple and will take just a few minutes:
1. Go to your C drive and open the 'Users' folder.
2. Under 'Users', go to your name (e.g. 'user1'), then find the ".eclipse" folder and delete it.
3. Along with that folder, also delete the "eclipse" folder, and you will find the task is completely done.
You can concatenate an array of values together using PowerShell's -join operator. Here is an example:
$FilePath = '{0}\temp\scripts\pshell\dump.txt' -f $env:SystemDrive;
$Computer = 'pc1';
$Speed = 9001;
$RegCheck = $true;
$Computer,$Speed,$RegCheck -join ',' | Out-File -FilePath $FilePath -Append -Width 200;
Output
pc1,9001,True
Check the code below; this worked for me (dbo.SplitString here is a user-defined table-valued split function):
@ManifestNoList VARCHAR(MAX)
WHERE
(
ManifestNo IN (SELECT value FROM dbo.SplitString(@ManifestNoList, ','))
)
Use this method (note that it takes the Context as a parameter, so it can be called from a static context):
public static boolean isOnline(Context context) {
    ConnectivityManager cm = (ConnectivityManager) context
            .getSystemService(Context.CONNECTIVITY_SERVICE);
    NetworkInfo netInfo = cm.getActiveNetworkInfo();
    return netInfo != null && netInfo.isConnectedOrConnecting();
}
This is the permission needed:
<uses-permission android:name="android.permission.ACCESS_NETWORK_STATE" />
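Usage from an Activity might look like this (a sketch; the call site is illustrative):
if (isOnline(this)) {
    // connected or connecting
} else {
    // no network available
}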
Just use:
if (!a) // true if a is falsy: 0, "", null, undefined, NaN or false
{
    // do whatever
}
else {
    // do whatever
}
This works because of JavaScript's type coercion, the same looseness that distinguishes == from ===: when a value is converted to a boolean, the values "", null, undefined, NaN, 0 and false all become false, which is exactly what you need here.
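A quick illustration of the coercion (values chosen for illustration):
console.log(!"");        // true
console.log(!null);      // true
console.log(!undefined); // true
console.log(!0);         // true
console.log(!-1);        // false -- negative numbers are truthy
console.log(null == undefined);  // true: loose equality coerces
console.log(null === undefined); // false: strict equality does not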
As of TypeScript 2.3 (or should I say 2.4, as in 2.3 this feature contains a bug which has been fixed in typescript@2.4), you can create a custom transformer to achieve what you want to do.
Actually, I have already created such a custom transformer, which enables the following.
https://github.com/kimamula/ts-transformer-keys
import { keys } from 'ts-transformer-keys';
interface Props {
id: string;
name: string;
age: number;
}
const keysOfProps = keys<Props>();
console.log(keysOfProps); // ['id', 'name', 'age']
Unfortunately, custom transformers are currently not so easy to use. You have to use them with the TypeScript transformation API instead of executing the tsc command. There is an issue requesting plugin support for custom transformers.
One way is to use the adjacent sibling selector (+) and a negative margin. This rule selects a list that comes right after a paragraph and applies a negative margin-top:
p + ul {
margin-top: -XX;
}
The discussion here gives the answer:
Actually, the specific discussion I witnessed did in fact hinge over functional purity. In an expression, there are frequently assumptions made about not having side-effects. Having ForEach is specifically inviting side-effects rather than just putting up with them. -- Keith Farmer (Partner)
Basically the decision was made to keep the extension methods functionally "pure". A ForEach would encourage side-effects when using the Enumerable extension methods, which was not the intent.
You can also use printf to pipe the input to your script.
var=val
printf "yes\nno\nmaybe\n$var\n" | ./your_script.sh
From memory, you call stringstream::str() to get the std::string value out.
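A minimal sketch:
#include <iostream>
#include <sstream>
#include <string>

int main() {
    std::stringstream ss;
    ss << "value: " << 42;
    std::string s = ss.str(); // extract the accumulated string
    std::cout << s << '\n';   // prints "value: 42"
    return 0;
}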
If you are using pm2, you can use it with autorestart set to false:
$ pm2 ecosystem
This will generate a sample ecosystem.config.js:
module.exports = {
    apps: [
        {
            script: './scripts/companies.js',
            autorestart: false,
        },
        {
            script: './scripts/domains.js',
            autorestart: false,
        },
        {
            script: './scripts/technologies.js',
            autorestart: false,
        },
    ],
}
$ pm2 start ecosystem.config.js
function abc() {
    var color = document.getElementById("test").style.color;
    color = (color == "red") ? "black" : "red";
    document.getElementById("test").style.color = color;
}
Please note, guys, that the error is in cv2.imread(): give the right path to the image. First, check whether your system loads the image at all with a simple cv2.imread() call. After that, see this code for face detection:
import numpy as np
import cv2
cascPath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site- packages/cv2/data/haarcascade_frontalface_default.xml"
eyePath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_eye.xml"
smilePath = "/Users/mayurgupta/opt/anaconda3/lib/python3.7/site-packages/cv2/data/haarcascade_smile.xml"
face_cascade = cv2.CascadeClassifier(cascPath)
eye_cascade = cv2.CascadeClassifier(eyePath)
smile_cascade = cv2.CascadeClassifier(smilePath)
img = cv2.imread('WhatsApp Image 2020-04-04 at 8.43.18 PM.jpeg')
gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
faces = face_cascade.detectMultiScale(gray, 1.3, 5)
for (x,y,w,h) in faces:
img = cv2.rectangle(img,(x,y),(x+w,y+h),(255,0,0),2)
roi_gray = gray[y:y+h, x:x+w]
roi_color = img[y:y+h, x:x+w]
eyes = eye_cascade.detectMultiScale(roi_gray)
for (ex,ey,ew,eh) in eyes:
cv2.rectangle(roi_color,(ex,ey),(ex+ew,ey+eh),(0,255,0),2)
cv2.imshow('img',img)
cv2.waitKey(0)
cv2.destroyAllWindows()
Here, cascPath, eyePath and smilePath should contain the right actual paths, picked up from lib/python3.7/site-packages/cv2/data, where the Haar cascade files live.
For those who are new to MySQL, and for Mac users: note that a connection is different from a database.
Steps to create a database:
Step 1: Create connection and click to go inside
Step 2: Click on database icon
Step 3: Name your database schema
Step 4: Apply query
Step 5: Your DB created, enjoy...
$('.tag1, .tag2').on('click', function() {
if ($(this).hasClass('clickedTag')){
// code here
} else {
// and here
}
});
or
function dothing() {
if ($(this).hasClass('clickedTag')){
// code here
} else {
// and here
}
}
$('.tag1, .tag2').on('click', dothing);
or
$('[class^=tag]').on('click', dothing);
Convert the series to a dataframe and transpose it, then concatenate it on:
srs = srs.to_frame().T
df = pd.concat([df, srs])  # DataFrame.append is deprecated; pd.concat does the same here
There are many good scripts above for generating INSERT statements, but I attempted one of my own to make it as user-friendly as possible, to also produce UPDATE statements, and to package the result ready for .sql files that can be stored by date.
It takes as input your normal SELECT statement with a WHERE clause, then outputs a list of INSERT statements and UPDATE statements. Together they form a sort of IF NOT EXISTS () INSERT ELSE UPDATE. It is also handy when there are non-updatable columns that need exclusion from the final INSERT/UPDATE statement.
Another thing the script below can do: it can even handle INNER JOINs with other tables as the input statement for the stored proc. It can be handy as a poor man's release-management tool that sits right at your fingertips while you are typing SQL SELECT statements all day.
Original post: Generate UPDATE statement in SQL Server for specific table
CREATE PROCEDURE [dbo].[sp_generate_updates] (
@fullquery nvarchar(max) = '',
@ignore_field_input nvarchar(MAX) = '',
@PK_COLUMN_NAME nvarchar(MAX) = ''
)
AS
SET NOCOUNT ON
SET QUOTED_IDENTIFIER ON
/*
-- For Standard USAGE: (where clause is mandatory)
EXEC [sp_generate_updates] 'select * from dbo.mytable where mytext=''1'' '
OR
SET QUOTED_IDENTIFIER OFF
EXEC [sp_generate_updates] "select * from dbo.mytable where mytext='1' "
-- For ignoring specific columns (to ignore in the UPDATE and INSERT SQL statement)
EXEC [sp_generate_updates] 'select * from dbo.mytable where 1=1 ' , 'Column01,Column02'
-- For just updates without insert statement (replace the * )
EXEC [sp_generate_updates] 'select Column01, Column02 from dbo.mytable where 1=1 '
-- For tables without a primary key: construct the key in the third variable
EXEC [sp_generate_updates] 'select * from dbo.mytable where 1=1 ' ,'','your_chosen_primary_key_Col1,key_Col2'
-- For complex updates with JOINED tables
EXEC [sp_generate_updates] 'select o1.Name, o1.category, o2.name+ '_hello_world' as #name
from overnightsetting o1
inner join overnightsetting o2 on o1.name=o2.name
where o1.name like '%appserver%'
(REMARK about the above: the use of # in front of a column name (so #abc) makes the script update that column (abc) with any column from an inner-joined table where you use the alias #abc.)
-------------README for the more deeply interested reader:
The goal of the stored procedure is to generate updates from simple SQL SELECT statements. It is made to be simple but fast and powerful. As always => power is nothing without control, so check before you execute.
Its power also lies in the fact that you can generate insert statements, so combined it gives you an "IF NOT EXISTS() INSERT" capability.
The script works where there are primary keys or identity columns on the table you want to update (or make inserts for).
It also works when no primary key / identity column exists and you define them yourself. But then be careful (duplicate hits can occur). When the table has a primary key it will always be used.
The script works with a real temporary table, made on the fly (APPROPRIATE RIGHTS needed), to put the values from the script inside, then adds 3 columns for constructing the "insert into tableX (...) values ()" and the 2 update statements.
We work with temporary structures like "where columnname = {Columnname}" and then later do the update on that temp table with the column values found on that same line.
Example: "where columnname = {Columnname}" for birthdate becomes "where birthdate = {birthdate}", and then we find the birthdate value on that line inside the temp table.
So the statement becomes "where birthdate = {19800417}".
Enjoy releasing scripts from now on... by Pieter van Nederkassel - freeware "CC BY-SA" (+use at own risk)
*/
IF OBJECT_ID('tempdb..#ignore','U') IS NOT NULL DROP TABLE #ignore
DECLARE @stringsplit_table TABLE (col nvarchar(255), dtype nvarchar(255)) -- table to store the primary keys or identity key
DECLARE @PK_condition nvarchar(512), -- placeholder for WHERE pk_field1 = pk_value1 AND pk_field2 = pk_value2 AND ...
@pkstring NVARCHAR(512), -- string to store the primary keys or the identity key
@table_name nvarchar(512), -- (left) table name, including schema
@table_N_where_clause nvarchar(max), -- tablename
@table_alias nvarchar(512), -- holds the (left) table alias if one available, else @table_name
@table_schema NVARCHAR(30), -- schema of @table_name
@update_list1 NVARCHAR(MAX), -- placeholder for SET fields section of update
@update_list2 NVARCHAR(MAX), -- placeholder for SET fields section of update value comming from other tables in the join, other than the main table to update => updateof base table possible with inner join
@list_all_cols BIT = 0, -- placeholder for values for the insert into table VALUES command
@select_list NVARCHAR(MAX), -- placeholder for SELECT fields of (left) table
@COLUMN_NAME NVARCHAR(255), -- will hold column names of the (left) table
@sql NVARCHAR(MAX), -- sql statement variable
@getdate NVARCHAR(17), -- transform getdate() to YYYYMMDDHHMMSSMMM
@tmp_table NVARCHAR(255), -- will hold the name of a physical temp table
@pk_separator NVARCHAR(1), -- separator used in @PK_COLUMN_NAME if provided (only checking obvious ones ,;|-)
@COLUMN_NAME_DATA_TYPE NVARCHAR(100), -- needed for insert statements to convert to right text string
@own_pk BIT = 0 -- check if table has PK (0) or if provided PK will be used (1)
set @ignore_field_input=replace(replace(replace(@ignore_field_input,' ',''),'[',''),']','')
set @PK_COLUMN_NAME= replace(replace(replace(@PK_COLUMN_NAME, ' ',''),'[',''),']','')
-- first we remove all linefeeds from the user query
set @fullquery=replace(replace(replace(@fullquery,char(10),''),char(13),' '),' ',' ')
set @table_N_where_clause=@fullquery
if charindex ('order by' , @table_N_where_clause) > 0
print ' WARNING: ORDER BY NOT ALLOWED IN UPDATE ...'
if @PK_COLUMN_NAME <> ''
select ' WARNING: IF you select your own primary keys, make double sure before doing the update statements below!! '
--print @table_N_where_clause
if charindex ('select ' , @table_N_where_clause) = 0
set @table_N_where_clause= 'select * from ' + @table_N_where_clause
if charindex ('select ' , @table_N_where_clause) > 0
exec (@table_N_where_clause)
set @table_N_where_clause=rtrim(ltrim(substring(@table_N_where_clause,CHARINDEX(' from ', @table_N_where_clause )+6, 4000)))
--print @table_N_where_clause
set @table_name=left(@table_N_where_clause,CHARINDEX(' ', @table_N_where_clause )-1)
IF CHARINDEX('where ', @table_N_where_clause) > 0 SELECT @table_alias = LTRIM(RTRIM(REPLACE(REPLACE(SUBSTRING(@table_N_where_clause,1, CHARINDEX('where ', @table_N_where_clause )-1),'(nolock)',''),@table_name,'')))
IF CHARINDEX('join ', @table_alias) > 0 SELECT @table_alias = SUBSTRING(@table_alias, 1, CHARINDEX(' ', @table_alias)-1) -- until next space
IF LEN(@table_alias) = 0 SELECT @table_alias = @table_name
IF (charindex (' *' , @fullquery) > 0 or charindex (@table_alias+'.*' , @fullquery) > 0 ) set @list_all_cols=1
/*
print @fullquery
print @table_alias
print @table_N_where_clause
print @table_name
*/
-- Prepare PK condition
SELECT @table_schema = CASE WHEN CHARINDEX('.',@table_name) > 0 THEN LEFT(@table_name, CHARINDEX('.',@table_name)-1) ELSE 'dbo' END
SELECT @PK_condition = ISNULL(@PK_condition + ' AND ', '') + QUOTENAME('pk_'+COLUMN_NAME) + ' = ' + QUOTENAME('pk_'+COLUMN_NAME,'{')
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE
WHERE OBJECTPROPERTY(OBJECT_ID(CONSTRAINT_SCHEMA + '.' + QUOTENAME(CONSTRAINT_NAME)), 'IsPrimaryKey') = 1
AND TABLE_NAME = REPLACE(@table_name,@table_schema+'.','')
AND TABLE_SCHEMA = @table_schema
SELECT @pkstring = ISNULL(@pkstring + ', ', '') + @table_alias + '.' + QUOTENAME(COLUMN_NAME) + ' AS pk_' + COLUMN_NAME
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE i1
WHERE OBJECTPROPERTY(OBJECT_ID(i1.CONSTRAINT_SCHEMA + '.' + QUOTENAME(i1.CONSTRAINT_NAME)), 'IsPrimaryKey') = 1
AND i1.TABLE_NAME = REPLACE(@table_name,@table_schema+'.','')
AND i1.TABLE_SCHEMA = @table_schema
-- if no primary keys exist then we try for identity columns
IF @PK_condition is null SELECT @PK_condition = ISNULL(@PK_condition + ' AND ', '') + QUOTENAME('pk_'+COLUMN_NAME) + ' = ' + QUOTENAME('pk_'+COLUMN_NAME,'{')
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMNPROPERTY(object_id(TABLE_SCHEMA+'.'+TABLE_NAME), COLUMN_NAME, 'IsIdentity') = 1
AND TABLE_NAME = REPLACE(@table_name,@table_schema+'.','')
AND TABLE_SCHEMA = @table_schema
IF @pkstring is null SELECT @pkstring = ISNULL(@pkstring + ', ', '') + @table_alias + '.' + QUOTENAME(COLUMN_NAME) + ' AS pk_' + COLUMN_NAME
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMNPROPERTY(object_id(TABLE_SCHEMA+'.'+TABLE_NAME), COLUMN_NAME, 'IsIdentity') = 1
AND TABLE_NAME = REPLACE(@table_name,@table_schema+'.','')
AND TABLE_SCHEMA = @table_schema
-- Same but in form of a table
INSERT INTO @stringsplit_table
SELECT 'pk_'+i1.COLUMN_NAME as col, i2.DATA_TYPE as dtype
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE i1
inner join INFORMATION_SCHEMA.COLUMNS i2
on i1.TABLE_NAME = i2.TABLE_NAME AND i1.TABLE_SCHEMA = i2.TABLE_SCHEMA
WHERE OBJECTPROPERTY(OBJECT_ID(i1.CONSTRAINT_SCHEMA + '.' + QUOTENAME(i1.CONSTRAINT_NAME)), 'IsPrimaryKey') = 1
AND i1.TABLE_NAME = REPLACE(@table_name,@table_schema+'.','')
AND i1.TABLE_SCHEMA = @table_schema
-- if no primary keys exist then we try for identity columns
IF 0=(select count(*) from @stringsplit_table) INSERT INTO @stringsplit_table
SELECT 'pk_'+i2.COLUMN_NAME as col, i2.DATA_TYPE as dtype
FROM INFORMATION_SCHEMA.COLUMNS i2
WHERE COLUMNPROPERTY(object_id(i2.TABLE_SCHEMA+'.'+i2.TABLE_NAME), i2.COLUMN_NAME, 'IsIdentity') = 1
AND i2.TABLE_NAME = REPLACE(@table_name,@table_schema+'.','')
AND i2.TABLE_SCHEMA = @table_schema
-- NOW handling the primary key given as parameter to the main batch
SELECT @pk_separator = ',' -- take this as default, we'll check lower if it's a different one
IF (@PK_condition IS NULL OR @PK_condition = '') AND @PK_COLUMN_NAME <> ''
BEGIN
IF CHARINDEX(';', @PK_COLUMN_NAME) > 0
SELECT @pk_separator = ';'
ELSE IF CHARINDEX('|', @PK_COLUMN_NAME) > 0
SELECT @pk_separator = '|'
ELSE IF CHARINDEX('-', @PK_COLUMN_NAME) > 0
SELECT @pk_separator = '-'
SELECT @PK_condition = NULL -- make sure to make it NULL, in case it was ''
INSERT INTO @stringsplit_table
SELECT LTRIM(RTRIM(x.value)) , 'datetime' FROM STRING_SPLIT(@PK_COLUMN_NAME, @pk_separator) x
SELECT @PK_condition = ISNULL(@PK_condition + ' AND ', '') + QUOTENAME(x.col) + ' = ' + replace(QUOTENAME(x.col,'{'),'{','{pk_')
FROM @stringsplit_table x
SELECT @PK_COLUMN_NAME = NULL -- make sure to make it NULL, in case it was ''
SELECT @PK_COLUMN_NAME = ISNULL(@PK_COLUMN_NAME + ', ', '') + QUOTENAME(x.col) + ' as pk_' + x.col
FROM @stringsplit_table x
--print 'pkcolumns '+ isnull(@PK_COLUMN_NAME,'')
update @stringsplit_table set col='pk_' + col
SELECT @own_pk = 1
END
ELSE IF (@PK_condition IS NULL OR @PK_condition = '') AND @PK_COLUMN_NAME = ''
BEGIN
RAISERROR('No Primary key or Identity column available on table. Add some columns as the third parameter when calling this SP to make your own temporary PK., also remove [] from tablename',17,1)
END
-- IF there are no primary keys or an identity key in the table active, then use the given columns as a primary key
if isnull(@pkstring,'') = '' set @pkstring = @PK_COLUMN_NAME
IF ISNULL(@pkstring, '') <> '' SELECT @fullquery = REPLACE(@fullquery, 'SELECT ','SELECT ' + @pkstring + ',' )
--print @pkstring
-- ignore fields for UPDATE STATEMENT (not ignored for the insert statement, in iserts statement we ignore only identity Columns and the columns provided with the main stored proc )
-- Place here all fields that you know can not be converted to nvarchar() values correctly, an thus should not be scripted for updates)
-- for insert we will take these fields along, although they will be incorrectly represented!!!!!!!!!!!!!.
SELECT ignore_field = 'uniqueidXXXX' INTO #ignore
UNION ALL SELECT ignore_field = 'UPDATEMASKXXXX'
UNION ALL SELECT ignore_field = 'UIDXXXXX'
UNION ALL SELECT value FROM string_split(@ignore_field_input,@pk_separator)
SELECT @getdate = REPLACE(REPLACE(REPLACE(REPLACE(CONVERT(NVARCHAR(30), GETDATE(), 121), '-', ''), ' ', ''), ':', ''), '.', '')
SELECT @tmp_table = 'Release_DATA__' + @getdate + '__' + REPLACE(@table_name,@table_schema+'.','')
SET @sql = replace( @fullquery, ' from ', ' INTO ' + @tmp_table +' from ')
----print (@sql)
exec (@sql)
SELECT @sql = N'alter table ' + @tmp_table + N' add update_stmt1 nvarchar(max), update_stmt2 nvarchar(max) , update_stmt3 nvarchar(max)'
EXEC (@sql)
-- Prepare update field list (only columns from the temp table are taken if they also exist in the base table to update)
SELECT @update_list1 = ISNULL(@update_list1 + ', ', '') +
CASE WHEN C1.COLUMN_NAME = 'ModifiedBy' THEN '[ModifiedBy] = left(right(replace(CONVERT(VARCHAR(19),[Modified],121),''''-'''',''''''''),19) +''''-''''+right(SUSER_NAME(),30),50)'
WHEN C1.COLUMN_NAME = 'Modified' THEN '[Modified] = GETDATE()'
ELSE QUOTENAME(C1.COLUMN_NAME) + ' = ' + QUOTENAME(C1.COLUMN_NAME,'{')
END
FROM INFORMATION_SCHEMA.COLUMNS c1
inner join INFORMATION_SCHEMA.COLUMNS c2
on c1.COLUMN_NAME =c2.COLUMN_NAME and c2.TABLE_NAME = REPLACE(@table_name,@table_schema+'.','') AND c2.TABLE_SCHEMA = @table_schema
WHERE c1.TABLE_NAME = @tmp_table --REPLACE(@table_name,@table_schema+'.','')
AND QUOTENAME(c1.COLUMN_NAME) NOT IN (SELECT QUOTENAME(ignore_field) FROM #ignore) -- eliminate binary, image etc value here
AND COLUMNPROPERTY(object_id(c2.TABLE_SCHEMA+'.'+c2.TABLE_NAME), c2.COLUMN_NAME, 'IsIdentity') <> 1
AND NOT EXISTS (SELECT 1
FROM INFORMATION_SCHEMA.KEY_COLUMN_USAGE ku
WHERE 1 = 1
AND ku.TABLE_NAME = c2.TABLE_NAME
AND ku.TABLE_SCHEMA = c2.TABLE_SCHEMA
AND ku.COLUMN_NAME = c2.COLUMN_NAME
AND OBJECTPROPERTY(OBJECT_ID(ku.CONSTRAINT_SCHEMA + '.' + QUOTENAME(ku.CONSTRAINT_NAME)), 'IsPrimaryKey') = 1)
AND NOT EXISTS (SELECT 1 FROM @stringsplit_table x WHERE x.col = c2.COLUMN_NAME AND @own_pk = 1)
-- Prepare update field list (here we only take columns that commence with a #, as this is our queue for doing the update that comes from an inner joined table)
SELECT @update_list2 = ISNULL(@update_list2 + ', ', '') + QUOTENAME(replace( C1.COLUMN_NAME,'#','')) + ' = ' + QUOTENAME(C1.COLUMN_NAME,'{')
FROM INFORMATION_SCHEMA.COLUMNS c1
WHERE c1.TABLE_NAME = @tmp_table --AND c1.TABLE_SCHEMA = @table_schema
AND QUOTENAME(c1.COLUMN_NAME) NOT IN (SELECT QUOTENAME(ignore_field) FROM #ignore) -- eliminate binary, image etc value here
AND c1.COLUMN_NAME like '#%'
-- similar for select list, but take all fields
SELECT @select_list = ISNULL(@select_list + ', ', '') + QUOTENAME(COLUMN_NAME)
FROM INFORMATION_SCHEMA.COLUMNS c
WHERE TABLE_NAME = REPLACE(@table_name,@table_schema+'.','')
AND TABLE_SCHEMA = @table_schema
AND COLUMNPROPERTY(object_id(TABLE_SCHEMA+'.'+TABLE_NAME), COLUMN_NAME, 'IsIdentity') <> 1 -- Identity columns are filled automatically by MSSQL, not needed at Insert statement
AND QUOTENAME(c.COLUMN_NAME) NOT IN (SELECT QUOTENAME(ignore_field) FROM #ignore) -- eliminate binary, image etc value here
SELECT @PK_condition = REPLACE(@PK_condition, '[pk_', '[')
set @select_list='if not exists (select * from '+ REPLACE(@table_name,@table_schema+'.','') +' where '+ @PK_condition +') INSERT INTO '+ REPLACE(@table_name,@table_schema+'.','') + '('+ @select_list + ') VALUES (' + replace(replace(@select_list,'[','{'),']','}') + ')'
SELECT @sql = N'UPDATE ' + @tmp_table + ' set update_stmt1 = ''' + @select_list + ''''
if @list_all_cols=1 EXEC (@sql)
--print 'select========== ' + @select_list
--print 'update========== ' + @update_list1
SELECT @sql = N'UPDATE ' + @tmp_table + N'
set update_stmt2 = CONVERT(NVARCHAR(MAX),''UPDATE ' + @table_name +
N' SET ' + @update_list1 + N''' + ''' +
N' WHERE ' + @PK_condition + N''') '
EXEC (@sql)
--print @sql
SELECT @sql = N'UPDATE ' + @tmp_table + N'
set update_stmt3 = CONVERT(NVARCHAR(MAX),''UPDATE ' + @table_name +
N' SET ' + @update_list2 + N''' + ''' +
N' WHERE ' + @PK_condition + N''') '
EXEC (@sql)
--print @sql
-- LOOPING OVER ALL base tables column for the INSERT INTO .... VALUES
DECLARE c_columns CURSOR FAST_FORWARD READ_ONLY FOR
SELECT COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = (CASE WHEN @list_all_cols=0 THEN @tmp_table ELSE REPLACE(@table_name,@table_schema+'.','') END )
AND TABLE_SCHEMA = @table_schema
UNION--pned
SELECT col, 'datetime' FROM @stringsplit_table
OPEN c_columns
FETCH NEXT FROM c_columns INTO @COLUMN_NAME, @COLUMN_NAME_DATA_TYPE
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @sql =
CASE WHEN @COLUMN_NAME_DATA_TYPE IN ('char','varchar','nchar','nvarchar')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt1 = REPLACE(update_stmt1, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('float','real','money','smallmoney')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt1 = REPLACE(update_stmt1, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'],126)), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('uniqueidentifier')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt1 = REPLACE(update_stmt1, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('text','ntext')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt1 = REPLACE(update_stmt1, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('xxxx','yyyy')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt1 = REPLACE(update_stmt1, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('binary','varbinary')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt1 = REPLACE(update_stmt1, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('XML','xml')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt1 = REPLACE(update_stmt1, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'],0)), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('datetime','smalldatetime')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt1 = REPLACE(update_stmt1, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'],121)), '''''''','''''''''''') + '''''''', ''NULL'')) '
ELSE
N'UPDATE ' + @tmp_table + N' SET update_stmt1 = REPLACE(update_stmt1, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
END
----PRINT @sql
EXEC (@sql)
FETCH NEXT FROM c_columns INTO @COLUMN_NAME, @COLUMN_NAME_DATA_TYPE
END
CLOSE c_columns
DEALLOCATE c_columns
--SELECT col FROM @stringsplit_table -- these are the primary keys
-- LOOPING OVER ALL temp tables column for the Update values
DECLARE c_columns CURSOR FAST_FORWARD READ_ONLY FOR
SELECT COLUMN_NAME,DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = @tmp_table -- AND TABLE_SCHEMA = @table_schema
UNION--pned
SELECT col, 'datetime' FROM @stringsplit_table
OPEN c_columns
FETCH NEXT FROM c_columns INTO @COLUMN_NAME, @COLUMN_NAME_DATA_TYPE
WHILE @@FETCH_STATUS = 0
BEGIN
SELECT @sql =
CASE WHEN @COLUMN_NAME_DATA_TYPE IN ('char','varchar','nchar','nvarchar')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt2 = REPLACE(update_stmt2, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')), update_stmt3 = REPLACE(update_stmt3, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('float','real','money','smallmoney')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt2 = REPLACE(update_stmt2, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'],126)), '''''''','''''''''''') + '''''''', ''NULL'')), update_stmt3 = REPLACE(update_stmt3, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'],126)), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('uniqueidentifier')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt2 = REPLACE(update_stmt2, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')), update_stmt3 = REPLACE(update_stmt3, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('text','ntext')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt2 = REPLACE(update_stmt2, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')), update_stmt3 = REPLACE(update_stmt3, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('xxxx','yyyy')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt2 = REPLACE(update_stmt2, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')), update_stmt3 = REPLACE(update_stmt3, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('binary','varbinary')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt2 = REPLACE(update_stmt2, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')), update_stmt3 = REPLACE(update_stmt3, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('XML','xml')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt2 = REPLACE(update_stmt2, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'],0)), '''''''','''''''''''') + '''''''', ''NULL'')), update_stmt3 = REPLACE(update_stmt3, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'],0)), '''''''','''''''''''') + '''''''', ''NULL'')) '
WHEN @COLUMN_NAME_DATA_TYPE IN ('datetime','smalldatetime')
THEN N'UPDATE ' + @tmp_table + N' SET update_stmt2 = REPLACE(update_stmt2, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'],121)), '''''''','''''''''''') + '''''''', ''NULL'')), update_stmt3 = REPLACE(update_stmt3, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'],121)), '''''''','''''''''''') + '''''''', ''NULL'')) '
ELSE
N'UPDATE ' + @tmp_table + N' SET update_stmt2 = REPLACE(update_stmt2, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')), update_stmt3 = REPLACE(update_stmt3, ''{' + @COLUMN_NAME + N'}'', ISNULL('''''''' + REPLACE(RTRIM(CONVERT(NVARCHAR(MAX),[' + @COLUMN_NAME + N'])), '''''''','''''''''''') + '''''''', ''NULL'')) '
END
EXEC (@sql)
----print @sql
FETCH NEXT FROM c_columns INTO @COLUMN_NAME, @COLUMN_NAME_DATA_TYPE
END
CLOSE c_columns
DEALLOCATE c_columns
SET @sql = 'Select * from ' + @tmp_table + ';'
--exec (@sql)
SELECT @sql = N'
IF OBJECT_ID(''' + @tmp_table + N''', ''U'') IS NOT NULL
BEGIN
SELECT ''USE ' + DB_NAME() + ''' as executelist
UNION ALL
SELECT ''GO '' as executelist
UNION ALL
SELECT '' /*PRESCRIPT CHECK */ ' + replace(@fullquery,'''','''''')+''' as executelist
UNION ALL
SELECT update_stmt1 as executelist FROM ' + @tmp_table + N' where update_stmt1 is not null
UNION ALL
SELECT update_stmt2 as executelist FROM ' + @tmp_table + N' where update_stmt2 is not null
UNION ALL
SELECT isnull(update_stmt3, '' add more columns inn query please'') as executelist FROM ' + @tmp_table + N' where update_stmt3 is not null
UNION ALL
SELECT ''--EXEC usp_AddInstalledScript 5, 5, 1, 1, 1, ''''' + @tmp_table + '.sql'''', 2 '' as executelist
UNION ALL
SELECT '' /*VERIFY WITH: */ ' + replace(@fullquery,'''','''''')+''' as executelist
UNION ALL
SELECT ''-- SCRIPT LOCATION: F:\CopyPaste\++Distributionpoint++\Release_Management\' + @tmp_table + '.sql'' as executelist
END'
exec (@sql)
SET @sql = 'DROP TABLE ' + @tmp_table + ';'
exec (@sql)
To complement this answer
If you need to parse quotes escaped with another quote, example:
"some ""value"" that is on xlsx file",123
You can use
function parse(text) {
const csvExp = /(?!\s*$)\s*(?:'([^'\\]*(?:\\[\S\s][^'\\]*)*)'|"([^"\\]*(?:\\[\S\s][^"\\]*)*)"|"([^""]*(?:"[\S\s][^""]*)*)"|([^,'"\s\\]*(?:\s+[^,'"\s\\]+)*))\s*(?:,|$)/g;
const values = [];
text.replace(csvExp, (m0, m1, m2, m3, m4) => {
if (m1 !== undefined) {
values.push(m1.replace(/\\'/g, "'"));
}
else if (m2 !== undefined) {
values.push(m2.replace(/\\"/g, '"'));
}
else if (m3 !== undefined) {
values.push(m3.replace(/""/g, '"'));
}
else if (m4 !== undefined) {
values.push(m4);
}
return '';
});
if (/,\s*$/.test(text)) {
values.push('');
}
return values;
}
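For comparison, Python's built-in csv module handles these doubled quotes natively; a quick sketch of mine (not part of the original answer):
import csv
import io

# doublequote=True is the default, so "" inside a quoted field
# is read back as a single "
row = next(csv.reader(io.StringIO('"some ""value"" that is on xlsx file",123')))
print(row)  # ['some "value" that is on xlsx file', '123']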
The answer you want depends very much on how and why you want to break up the data frame.
For example, if you want to leave out some variables, you can create new data frames from specific columns of the database. The subscripts in brackets after the data frame refer to row and column numbers. Check out Spoetry for a complete description.
newdf <- mydf[,1:3]
Or, you can choose specific rows.
newdf <- mydf[1:3,]
And these subscripts can also be logical tests, such as choosing rows that contain a particular value, or factors with a desired value.
What do you want to do with the chunks left over? Do you need to perform the same operation on each chunk of the database? Then you'll want to ensure that the subsets of the data frame end up in a convenient object, such as a list, that will help you perform the same command on each chunk of the data frame.
For those who use docker-compose
, it’s really easy to set a password without any config file like redis.conf
. Here’s how you would normally use the official Redis image:
redis:
image: 'redis:4-alpine'
ports:
- '6379:6379'
And here’s all you need to change to set a custom password:
redis:
image: 'redis:4-alpine'
command: redis-server --requirepass yourpassword
ports:
- '6379:6379'
Everything will start up as normal and your Redis server will be protected by a password.
For details, this blog post seems to support the idea.
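To confirm the password actually took effect, here is a minimal check from Python, assuming the third-party redis-py package is installed:
import redis

# without password=... this would raise an AuthenticationError
r = redis.Redis(host='localhost', port=6379, password='yourpassword')
print(r.ping())  # True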
You may use the C atoi function (in a try/catch clause, because the conversion isn't always possible), but there's nothing in the MFC classes that does it better.
It is a build target that is not a filename.
What I'd try first is
to drop all constraints, indexes, triggers and full-text indexes before you update.
If the above wasn't performant enough, my next move would be
to create a CSV file with 12 million records and bulk import it using bcp.
Lastly, I'd create a new heap table (meaning table with no primary key) with no indexes on a different filegroup, populate it with -1. Partition the old table, and add the new partition using "switch".
To check the input type:
<!DOCTYPE html>
<html>
<body>
<input type=number id="txtinp">
<button onclick=checktype()>Try it</button>
<script>
function checktype()
{
alert(document.getElementById("txtinp").type);
}
</script>
</body>
</html>
It's a textual encoding of binary data where the resultant text has nothing but letters, numbers and the symbols "+", "/" and "=". It's a convenient way to store/transmit binary data over media that is specifically used for textual data.
But why Base-64? The two alternatives for converting binary data into text that immediately spring to mind are:
Base-64 maps 3 bytes (8 x 3 = 24 bits) onto 4 characters of 6 bits each (6 x 4 = 24 bits). The result looks something like "TWFuIGlzIGRpc3Rpb...", so the bloating is only a mere 4/3 = 1.333... times the original.
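The 3-bytes-to-4-characters mapping is easy to verify in Python (a quick sketch of mine, not from the original explanation):
import base64

data = b"Man is distinguished"       # 20 bytes
encoded = base64.b64encode(data)     # b'TWFuIGlzIGRpc3Rpbmd1aXNoZWQ='
# every 3 input bytes become 4 output characters (plus '=' padding),
# so the size ratio approaches 4/3
print(len(data), len(encoded))       # 20 28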
If it has a UI, it has an "API". Let's use the following example: I want to publish the pic I use in any new blog post I create. Let's assume it is Wordpress.
Just update your eclipse.ini file (you can find it in the root directory of Eclipse) with this:
-vm
path/javaw.exe
for example:
-vm
C:/Program Files/Java/jdk1.7.0_09/jre/bin/javaw.exe
The first maximum you would hit is the length of a String, which is 2^31 - 1 digits. It's much smaller than the maximum of a BigInteger, but IMHO it loses much of its value if it can't be printed.
?modestbranding=1&autohide=1&showinfo=0&controls=0
autohide=1
is something that I never found... but it was the key :) I hope it helps
If you already have a test class, but missing the JUnit library dependency, please refer to Configuring Libraries for Unit Testing documentation section. Pressing Alt+Enter on the red code should give you an intention action to add the missing jar.
However, IDEA offers much more. If you don't have a test class yet and want to create one for any of the source classes, see instructions below.
You can use the Create Test intention action by pressing Alt+Enter while standing on the name of your class inside the editor or by using Ctrl+Shift+T keyboard shortcut.
A dialog appears where you select what testing framework to use and press Fix button for the first time to add the required library jars to the module dependencies. You can also select methods to create the test stubs for.
You can find more details in the Testing help section of the on-line documentation.
I looked in Eclipse > Preferences > Installed JREs > JRE Definition and found the java_home directory, so it's /Library/Java/JavaVirtualMachines/jdk1.7.0_17.jdk/Contents/Home
Here's one approach:
SELECT cur.textID, cur.fromEmail, cur.subject,
cur.timestamp, cur.read
FROM incomingEmails cur
LEFT JOIN incomingEmails next
on cur.fromEmail = next.fromEmail
and cur.timestamp < next.timestamp
WHERE next.timestamp is null
and cur.toUserID = '$userID'
ORDER BY LOWER(cur.fromEmail)
Basically, you join the table on itself, searching for later rows. In the where clause you state that there cannot be later rows. This gives you only the latest row.
If there can be multiple emails with the same timestamp, this query would need refining. If there's an incremental ID column in the email table, change the JOIN like:
LEFT JOIN incomingEmails next
on cur.fromEmail = next.fromEmail
and cur.id < next.id
http://www.gnu.org/software/hello/manual/libc/Variable-Arguments-Output.html gives the following example to print to stderr. You can modify it to use your log function instead:
#include <stdio.h>
#include <stdarg.h>
void
eprintf (const char *template, ...)
{
va_list ap;
extern char *program_invocation_short_name;
fprintf (stderr, "%s: ", program_invocation_short_name);
va_start (ap, template);
vfprintf (stderr, template, ap);
va_end (ap);
}
Instead of vfprintf you will need to use vsprintf where you need to provide an adequate buffer to print into.
Let me give you another tutorial written by me. It answers your question, but it also explains why we are doing some of these things. I also tried to make it concise.
So you have a list_of_documents
which is just an array of strings and another document
which is just a string. You need to find the document from the list_of_documents
that is the most similar to document
.
Let's combine them together: documents = list_of_documents + [document]
Let's start with dependencies. It will become clear why we use each of them.
from nltk.corpus import stopwords
import string
from nltk.tokenize import wordpunct_tokenize as tokenize
from nltk.stem.porter import PorterStemmer
from sklearn.feature_extraction.text import TfidfVectorizer
from scipy.spatial.distance import cosine
One of the approaches that can be used is a bag-of-words approach, where we treat each word in the document independently of the others and just throw all of them together into a big bag. From one point of view it loses a lot of information (like how the words are connected), but from another point of view it makes the model simple.
In English and in any other human language there are a lot of "useless" words like 'a', 'the', 'in' which are so common that they do not possess a lot of meaning. They are called stop words and it is a good idea to remove them. Another thing that one can notice is that words like 'analyze', 'analyzer', 'analysis' are really similar. They have a common root and all can be converted to just one word. This process is called stemming and there exist different stemmers which differ in speed, aggressiveness and so on. So we transform each of the documents to list of stems of words without stop words. Also we discard all the punctuation.
porter = PorterStemmer()
stop_words = set(stopwords.words('english'))
modified_arr = [[porter.stem(i.lower()) for i in tokenize(d.translate(None, string.punctuation)) if i.lower() not in stop_words] for d in documents]
So how will this bag of words help us? Imagine we have 3 bags: [a, b, c]
, [a, c, a]
and [b, c, d]
. We can convert them to vectors in the basis [a, b, c, d]
. So we end up with vectors: [1, 1, 1, 0]
, [2, 0, 1, 0]
and [0, 1, 1, 1]
. The same is true of our documents (only the vectors will be way longer). Now we see that we removed a lot of words and stemmed others to decrease the dimensions of the vectors. Here is just an interesting observation: longer documents will have way more positive elements than shorter ones; that's why it is nice to normalize the vector. This is called term frequency, TF; people also use additional information about how often the word is used in other documents - inverse document frequency, IDF. Together we have the metric TF-IDF, which has a couple of flavors. This can be achieved with one line in sklearn :-)
modified_doc = [' '.join(i) for i in modified_arr] # this is only to convert our list of lists to list of strings that vectorizer uses.
tf_idf = TfidfVectorizer().fit_transform(modified_doc)
Actually the vectorizer allows you to do a lot of things, like removing stop words and lowercasing. I have done them in a separate step only because sklearn does not have non-English stopwords, but nltk has.
So we have all the vectors calculated. The last step is to find which one is the most similar to the last one. There are various ways to achieve that; one of them is Euclidean distance, which is not so great for the reason discussed here. Another approach is cosine similarity. We iterate over all the documents, calculating the cosine similarity between each document and the last one:
l = len(documents) - 1       # index of our query document (the last one)
minimum = (1, None)          # (best cosine distance so far, document index)
for i in xrange(l):
    # cosine() returns a distance, so smaller means more similar
    minimum = min((cosine(tf_idf[i].todense(), tf_idf[l].todense()), i), minimum)
print minimum
Now minimum will have information about the best document and its score.
React.js is the parent JavaScript library for developing web applications.
While you use tags like <View>
and <Text>
very frequently in React-Native, React.js uses web HTML tags like <div>
, <h1>
and <h2>
, which are only synonyms in the dictionary of web/mobile development.
For React.js you need the DOM for rendering of HTML tags, while for a mobile application React-Native uses AppRegistry to register your app.
I hope this is an easy explanation for quick differences/similarities in React.js and React-Native.
Write one! Wrapping JC's answer gives me:
randomRows = function(df,n){
return(df[sample(nrow(df),n),])
}
Now make it better by checking first if n<=nrow(df) and stopping with an error.
ECMAScript is the language, whereas JavaScript, JScript, and even ActionScript 3 are called "dialects". Wikipedia sheds some light on this.
Steps that worked for me on Ubuntu 8.04.2
to remove postgres 8.3
List All Postgres related packages
dpkg -l | grep postgres
ii postgresql 8.3.17-0ubuntu0.8.04.1 object-relational SQL database (latest versi
ii postgresql-8.3 8.3.9-0ubuntu8.04 object-relational SQL database, version 8.3
ii postgresql-client 8.3.9-0ubuntu8.04 front-end programs for PostgreSQL (latest ve
ii postgresql-client-8.3 8.3.9-0ubuntu8.04 front-end programs for PostgreSQL 8.3
ii postgresql-client-common 87ubuntu2 manager for multiple PostgreSQL client versi
ii postgresql-common 87ubuntu2 PostgreSQL database-cluster manager
ii postgresql-contrib 8.3.9-0ubuntu8.04 additional facilities for PostgreSQL (latest
ii postgresql-contrib-8.3 8.3.9-0ubuntu8.04 additional facilities for PostgreSQL
Remove all above listed
sudo apt-get --purge remove postgresql postgresql-8.3 postgresql-client postgresql-client-8.3 postgresql-client-common postgresql-common postgresql-contrib postgresql-contrib-8.3
Remove the following folders
sudo rm -rf /var/lib/postgresql/
sudo rm -rf /var/log/postgresql/
sudo rm -rf /etc/postgresql/
For me, it was... the AntiVirus! Kaspersky Endpoint Security 10. It seems that the frequent compilations and the changing of the exe caused it to block the file.
You can use sys.argv
to get the arguments as a list.
If you need to access individual elements, you can use
sys.argv[i]
where i
is the index. Index 0
will give you the Python filename being executed; any index after that is one of the arguments passed.
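A minimal illustration (my own example, assuming it is saved as args.py):
import sys

# run as: python args.py hello world
print(sys.argv)       # ['args.py', 'hello', 'world']
print(sys.argv[0])    # 'args.py' - the script name
print(sys.argv[1:])   # ['hello', 'world'] - the arguments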
The simplest way in Python to get the mode of a list or array a
:
import statistics
print("mode = " + str(statistics.mode(a)))
That's it
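A quick usage example of my own:
import statistics

a = [1, 2, 2, 3, 3, 3]
print("mode = " + str(statistics.mode(a)))  # mode = 3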
Just a word of warning. I finally got around the problem with Firefox and CORS.
The solution for me was this post
However Firefox was behaving really, really strange after setting those headers on the Apache server (in the folder .htaccess).
I added a lot of console.log("Hi FF, you are here A")
etc to see what was going on.
At first it looked like it hung on xhr.send()
. But then I discovered it did not get to this statement. I placed another console.log
right before it and did not get there - even though there was nothing between the last console.log
and the new one. It just stopped between two console.log
.
I tried reordering and deleting lines to see if there was any strange character in the file. I found nothing.
Restarting Firefox fixed the trouble.
Yes, I should file a bug. It is just that it is so strange so don't know how to reproduce it.
NOTICE: And, oh, I just did the Header always set
parts, not the Rewrite*
part!
Some are saying that git commit -am
will do the trick. This won't work, because it only commits changes to tracked files; it doesn't add new files. Source.
After some research I figured that there is no such command to do that, but you can write a script on your .bashrc
or .bash_profile
depending on your OS.
I will share the one I use:
function gac {
if [[ $# -eq 0 ]]
then git add . && git commit
else
git add . && git commit -m "$*"
fi
}
With this, all your changes will be added and committed. You can just type gac
and you will be prompted to write the commit message.
Or you can type your commit message directly, gac Hello world
, and all your changes will be added and your commit message will be Hello world; note that quotes ("")
are not used.
I see only hacks in the answers, but there's actually a built-in solution that's very easy to use. Basically, you can capture the focused element like this:
const focusedElement = document.activeElement
https://developer.mozilla.org/en-US/docs/Web/API/DocumentOrShadowRoot/activeElement
Well, REST by design is stateless. By adding session (or anything else of that kind) you are making it stateful and defeating any purpose of having a RESTful API.
The whole idea of a RESTful service is that every resource is uniquely addressable using a universal syntax for use in hypermedia links, and each HTTP request should carry enough information by itself for its recipient to process it, in complete harmony with the stateless nature of HTTP.
So whatever you are trying to do with Web API here, should most likely be re-architectured if you wish to have a RESTful API.
With that said, if you are still willing to go down that route, there is a hacky way of adding session to Web API, and it's been posted by Imran here http://forums.asp.net/t/1780385.aspx/1
Code (though I wouldn't really recommend that):
public class MyHttpControllerHandler
: HttpControllerHandler, IRequiresSessionState
{
public MyHttpControllerHandler(RouteData routeData): base(routeData)
{ }
}
public class MyHttpControllerRouteHandler : HttpControllerRouteHandler
{
protected override IHttpHandler GetHttpHandler(RequestContext requestContext)
{
return new MyHttpControllerHandler(requestContext.RouteData);
}
}
public class ValuesController : ApiController
{
public string GET(string input)
{
var session = HttpContext.Current.Session;
if (session != null)
{
if (session["Time"] == null)
{
session["Time"] = DateTime.Now;
}
return "Session Time: " + session["Time"] + input;
}
return "Session is not availabe" + input;
}
}
and then add the HttpControllerHandler to your API route:
route.RouteHandler = new MyHttpControllerRouteHandler();
This might work for you:
OIFS=$IFS; IFS=$'\n'; array=($(ls -ls)); IFS=$OIFS; echo "${array[1]}"
I'm adding an answer for anyone using Doug Hellmann's excellent virtualenvwrapper specifically since the existing answers didn't do it for me.
Some context:
- python3 -m venv doesn't support Python 2 environments
- mkproject creates the virtual environment, creates an empty project directory, and cds into it
- the workon command activates any project irrespective of Python version
Directions:
Let's say your existing project is named foo
and is currently running Python 2 (mkproject -p python2 foo
), though the commands are the same whether upgrading from 2.x to 3.x, 3.6.0 to 3.6.1, etc. I'm also assuming you're currently inside the activated virtual environment.
1. Deactivate and remove the old virtual environment:
$ deactivate
$ rmvirtualenv foo
Note that if you've added any custom commands to the hooks (e.g., bin/postactivate
) you'd need to save those before removing the environment.
2. Stash the real project in a temp directory:
$ cd ..
$ mv foo foo-tmp
3. Create the new virtual environment (and project dir) and activate:
$ mkproject -p python3 foo
4. Replace the empty generated project dir with real project, change back into project dir:
$ cd ..
$ mv -f foo-tmp foo
$ cdproject
5. Re-install dependencies, confirm new Python version, etc:
$ pip install -r requirements.txt
$ python --version
If this is a common use case, I'll consider opening a PR to add something like $ upgradevirtualenv
/ $ upgradeproject
to virtualenvwrapper.
Be sure to have both objects created in controller: @post
and @comment
for the post, eg:
@post = Post.find params[:post_id]
@comment = Comment.new(:post=>@post)
Then in view:
<%= form_for([@post, @comment]) do |f| %>
Be sure to explicitly define the array in the form_for, not just comma separated like you have above.
I think you are putting one too many curly brackets. Try this
@if($user->status=='waiting')
<td><a href="#" class="viewPopLink btn btn-default1" role="button" data-id="{!! $user->travel_id !!}" data-toggle="modal" data-target="#myModal">Approve/Reject</a> </td>
@else
<td>{!! $user->status !!}</td>
@endif
If you want to disable the unreferenced local variable warning,
write this in some header:
template<class T>
void ignore (const T & ) {}
and use
catch(const Except & excpt) {
ignore(excpt); // No warning
// ...
}
Setting Tomcat to listen on port 80 is WRONG; for development, 8080 is a good port to use. For production use, just set up an Apache that forwards your requests to your Tomcat. Here is a how-to.
SELECT CONVERT(VARCHAR(2),DATEPART("dd",doj)) +
'/' + CONVERT(VARCHAR(2),DATEPART("mm",doj)) +
'/' + CONVERT(VARCHAR(4),DATEPART("yy",doj)) FROM emp
I think it is possible using the technique below:
`ts_create` timestamp NOT NULL DEFAULT '0000-00-00 00:00:00',
`ts_update` timestamp NOT NULL DEFAULT CURRENT_TIMESTAMP ON UPDATE CURRENT_TIMESTAMP
You will have to modify the below line:
<li><a href="#" data-toggle="modal" data-target="modalRegister">Register</a></li>
modalRegister
is the ID and hence requires a preceding #
for an ID reference in HTML.
So, the modified html code snippet would be as follows:
<li><a href="#" data-toggle="modal" data-target="#modalRegister">Register</a></li>
This is what Wikipedia says about finally clause:
More common is a related clause (finally, or ensure) that is executed whether an exception occurred or not, typically to release resources acquired within the body of the exception-handling block.
Let's dissect your program.
try {
System.out.print(1);
q();
}
So, 1
will be output to the screen, then q()
is called. In q()
, an exception is thrown. The exception is then caught by catch ( Exception y )
, which does nothing. The finally clause is then executed (it has to be), so 3
will be printed to the screen. Because there's an exception thrown in the finally clause of method q()
, and because q()
passes the exception to the parent stack (via the throws Exception
in the method declaration), a new Exception()
will be thrown and caught by catch ( Exception i )
; a MyExc2
exception will then be thrown (for now add it to the exception stack), but the finally in the main
block will be executed first.
So in,
catch ( Exception i ) {
throw( new MyExc2() );
}
finally {
System.out.print(2);
throw( new MyExc1() );
}
A finally clause is called...(remember, we've just caught Exception i
and thrown MyExc2
). In essence, 2
is printed on screen...and after the 2
is printed on screen, a MyExc1
exception is thrown. MyExc1
is handled by the public static void main(...)
method.
Output:
"132Exception in thread main MyExc1"
The lecturer is correct! :-)
In essence, if you have a finally in a try/catch clause, the finally will be executed (after the exception is caught, before the caught exception is thrown out).
Works with VS 2017
I installed the Visual Studio Command Prompt (devCmd) extension tool.
You can download it here: https://marketplace.visualstudio.com/items?itemName=ShemeerNS.VisualStudioCommandPromptdevCmd#review-details
Double click on the file, make sure IDE is closed during installation.
Open visual studio and Run Developer Command Prompt from VS2017
When I inspected the problem, I noticed that the following DLLs were missing from the output folder. The simple solution is to copy EntityFramework.dll and EntityFramework.SqlServer.dll, along with the app.config, to the output folder if the application is in debug mode. At the same time, change the app.config build property "Copy to Output Directory" to "Copy always". This will solve your problem.
Try the line-height property.
For example, 12px font-size and 4px distant from the bottom and upper lines:
line-height: 20px; /* 4px +12px + 4px */
Or with em
units
line-height: 1.7em; /* 1em = 12px in this case. 20/12 == 1.666666 */
Use cURL:
*/5 * * * * curl http://example.com/check/
What about this?
switch (true)
{
case (age >= 1 && age <= 8):
MessageBox.Show("You are only " + age + " years old\n You must be kidding right.\nPlease fill in your *real* age.");
break;
case (age >= 9 && age <= 15):
MessageBox.Show("You are only " + age + " years old\n That's too young!");
break;
case (age >= 16 && age <= 100):
MessageBox.Show("You are " + age + " years old\n Perfect.");
break;
default:
MessageBox.Show("You an old person.");
break;
}
Cheers
It is possible to play a local video file.
<input type="file" accept="video/*"/>
<video controls autoplay></video>
When a file is selected via the input
element:
- take the first file from the input.files
FileList
- create an object URL for it and set it as the video.src
property
Lean back and watch :)
http://jsfiddle.net/dsbonev/cCCZ2/embedded/result,js,html,css/
(function localFileVideoPlayer() {
  'use strict'
  var URL = window.URL || window.webkitURL
  var displayMessage = function(message, isError) {
    var element = document.querySelector('#message')
    element.innerHTML = message
    element.className = isError ? 'error' : 'info'
  }
  var playSelectedFile = function(event) {
    var file = this.files[0]
    var type = file.type
    var videoNode = document.querySelector('video')
    var canPlay = videoNode.canPlayType(type)
    if (canPlay === '') canPlay = 'no'
    var message = 'Can play type "' + type + '": ' + canPlay
    var isError = canPlay === 'no'
    displayMessage(message, isError)

    if (isError) {
      return
    }

    var fileURL = URL.createObjectURL(file)
    videoNode.src = fileURL
  }
  var inputNode = document.querySelector('input')
  inputNode.addEventListener('change', playSelectedFile, false)
})()

video,
input {
  display: block;
}

input {
  width: 100%;
}

.info {
  background-color: aqua;
}

.error {
  background-color: red;
  color: white;
}

<h1>HTML5 local video file player example</h1>
<div id="message"></div>
<input type="file" accept="video/*" />
<video controls autoplay></video>
If you're just searching for a string rather than a more complicated regular expression, you can use indexOf()
- but remember to lowercase both strings first because indexOf()
is case sensitive:
var string="Stackoverflow is the BEST";
var searchstring="best";
// lowercase both strings
var lcString=string.toLowerCase();
var lcSearchString=searchstring.toLowerCase();
var result = lcString.indexOf(lcSearchString)>=0;
alert(result);
Or in a single line:
var result = string.toLowerCase().indexOf(searchstring.toLowerCase())>=0;
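The same lowercase-both-sides trick carries over to other languages; a Python sketch of mine for comparison:
string = "Stackoverflow is the BEST"
searchstring = "best"

# lowercase both sides before the containment test
result = searchstring.lower() in string.lower()
print(result)  # True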
A variation on your idea is to have a server-side USER-AGENT detector
that will figure out what style sheet to attach to the page. This way you can have a firefox.css, ie.css, opera.css, etc
.
You can accomplish a similar thing in Javascript itself, although you may not regard it as clean.
I have done a similar thing by having a default.css
which includes all common styles and then specific style sheets
are added to override, or enhance the defaults.
If the column is a timestamp you can do the following:
if(fromDate!=null){
criteria.add(Restrictions.sqlRestriction("TRUNC(COLUMN) >= TO_DATE('" + dataFrom + "','dd/mm/yyyy')"));
}
if(toDate!=null){
criteria.add(Restrictions.sqlRestriction("TRUNC(COLUMN) <= TO_DATE('" + dataTo + "','dd/mm/yyyy')"));
}
resultDB = criteria.list();
Actually it is possible:
Header('Location: '.$_SERVER['PHP_SELF']);
Exit(); //optional
And it will reload the same page.
It is a simple 3-step process:
This, but also - the invited user needs to be logged in to Github before clicking the invitation link in their email or they'll get a 404 error.
You just need to put this:
($('#{{ form.email.id_for_label }}').attr("placeholder","Work email address"));
($('#{{ form.password1.id_for_label }}').attr("placeholder","Password"));
# -i (inplace)
LANG=C sed -i -E "s|[\d128-\d255]||g" /path/to/file(s)
The LANG=C
part's role is to avoid an Invalid collation character
error.
Based on Ivan's answer and Patrick's comment.
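A rough Python equivalent of mine, operating on raw bytes just like the sed command (it strips bytes 128-255, not decoded characters):
import re

with open('/path/to/file', 'rb') as f:
    data = f.read()

# drop every byte in the 128-255 range, mirroring [\d128-\d255]
cleaned = re.sub(b'[\x80-\xff]', b'', data)

with open('/path/to/file', 'wb') as f:
    f.write(cleaned)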
If your code is prepared to deal with overflow errors, you can throw an exception if data
is too large.
size_t data = 99999999;
if ( data > INT_MAX )
{
throw std::overflow_error("data is larger than INT_MAX");
}
int convertData = static_cast<int>(data);
Refreshing attribute on events is a better approach than scanning value every tenth of a second...
http://jsfiddle.net/yqdcsqzz/3/
inputElement.onchange = function()
{
this.setAttribute('value', this.value);
};
inputElement.onkeyup = function()
{
this.setAttribute('value', this.value);
};
I was facing the same problem when importing projects into IntelliJ.
In my case, first check the SDK details and make sure you have configured the JDK correctly.
Go to File -> Project Structure -> Platform Settings -> SDKs
and check whether your JDK is correct.
Next, I removed the project from IntelliJ and deleted all IntelliJ and IDE-related files and folders from the project folder (.idea, .settings, .classpath, dependency-reduced-pom). Also delete the target folder, then re-import the project.
The above solution worked in my case.
The error is here:
lastrow = wsPOR.Range("A" & Rows.Count).End(xlUp).Row + 1
wsPOR is a workbook and not a worksheet. If you are working with "Sheet1" of that workbook then try this
lastrow = wsPOR.Sheets("Sheet1").Range("A" & _
wsPOR.Sheets("Sheet1").Rows.Count).End(xlUp).Row + 1
Similarly
wsPOR.Range("A2:G" & lastrow).Select
should be
wsPOR.Sheets("Sheet1").Range("A2:G" & lastrow).Select
Try
var fu1 = document.getElementById("FileUpload1").value;
What regex engine are you using? Most of them will support the following expression:
\{\d+:\d+\}
The \d
is actually shorthand for [0-9]
, but the important part is the addition of +
which means "one or more".
After reading your previous comments, it's clear that you misunderstood the Hash::make function. Hash::make uses bcrypt hashing. By design, this means that every time you run Hash::make('password'), the result will be different (due to random salting). That's why you can't verify the password by simply checking the hashed password against the hashed input.
The proper way to validate a hash is by using:
Hash::check($passwordToCheck, $hashedPassword);
So, for example, your login function would be implemented like this:
public static function login($email, $password) {
$user = User::whereEmail($email)->first();
if ( !$user ) return null; //check if user exists
if ( Hash::check($password, $user->password) ) {
return $user;
} else return null;
}
And then you'd call it like this:
$user = User::login('[email protected]', 'password');
if ( !$user ) echo "Invalid credentials.";
else echo "First name: $user->firstName";
I recommend reviewing the Laravel security documentation, as functions already exist in Laravel to perform this type of authorization.
Furthermore, if your custom-made hashing algorithm generates the same hash every time for a given input, it's a security risk. A good one-way hashing algorithm should use random salting.
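The same salt-then-verify behavior is easy to demonstrate outside Laravel; a sketch in Python, assuming the third-party bcrypt package:
import bcrypt

# two hashes of the same password differ because of the random salt...
h1 = bcrypt.hashpw(b'password', bcrypt.gensalt())
h2 = bcrypt.hashpw(b'password', bcrypt.gensalt())
print(h1 == h2)                          # False

# ...so verification must go through checkpw, never a string comparison
print(bcrypt.checkpw(b'password', h1))   # True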
If you want to further automate this, because you are checking your module into version control, and don't want to rely upon devs remembering to npm link, you can add this to your package.json "scripts" section:
"scripts": {
"postinstall": "npm link ../somelocallib",
"postupdate": "npm link ../somelocallib"
}
This feels beyond hacky, but it seems to "work". Got the tip from this npm issue: https://github.com/npm/npm/issues/1558#issuecomment-12444454
I'm using Angular 6. I followed Mav55's answer and it worked. However I wanted to make sure if fixture.detectChanges();
was really necessary so I removed it and it still worked. Then I removed tick();
to see if it worked and it did. Finally I removed the test from the fakeAsync()
wrap, and surprise, it worked.
So I ended up with this:
it('should call onClick method', () => {
const onClickMock = spyOn(component, 'onClick');
fixture.debugElement.query(By.css('button')).triggerEventHandler('click', null);
expect(onClickMock).toHaveBeenCalled();
});
And it worked just fine.
Question: Check whether a $_POST value is empty.
Translation: Check to see if an array key/index has a value associated with it.
Answer: Depends on your emphasis on security. Depends on what is allowed as valid input.
1. Some people say use empty().
From the PHP Manual:
"[Empty] determines whether a variable is considered to be empty. A variable is considered empty if it does not exist or if its value equals FALSE."
The following are thus considered empty.
"" (an empty string)
0 (0 as an integer)
0.0 (0 as a float)
"0" (0 as a string)
NULL
FALSE
array() (an empty array)
$var; (a variable declared, but without a value)
If none of these values are valid for your input control, then empty()
would work. The problem here is that empty()
might be too broad to be used consistently (the same way, for the same reason, on different input control submissions to $_POST
or $_GET
). A good use of empty()
is to check if an entire array is empty (has no elements).
2. Some people say use isset().
isset()
(a language construct) cannot operate on entire arrays, as in isset($myArray)
. It can only operate on variables and array elements (via the index/key): isset($var)
and isset($_POST['username'])
. The isset()
language construct does two things. First it checks to see if a variable or array index/key has a value associated with it. Second, it checks to make sure that value is not equal to the PHP NULL value.
In short, the most accurate check can be accomplished best with isset()
, as some input controls do not even register with $_POST when they are not selected or checked. I have never known a form that submitted the PHP NULL value. None of mine do, so I use isset()
to check if a $_POST
key has no value associated with it (and that is not NULL). isset()
is a much stricter test of emptiness (in the sense of your question) than empty()
.
3. Some people say just do if($var), if($myArray), or if($myArray['userName']) to determine emptiness.
You can test anything that evaluates to true or false in an if statement. Empty arrays evaluate to false and so do variables that are not set. Variables that contain the PHP NULL value also evaluate to false. Unfortunately in this case, like with
empty()
, many more things also evaluate to false: 1. the empty string '', zero (0), zero.zero (0.0), the string zero '0', boolean false, and certain empty XML objects.--Doyle, Beginning PHP 5.3
In conclusion, use isset()
and consider combining it with other tests. Example:
May not work due to superglobal screwiness, but would work for other arrays without question.
if (is_array($_POST) && !empty($_POST)) {
// Now test for your successful controls in $_POST with isset()
}
Hence, why look for a value associated with a key before you even know for sure that $_POST
represents an array and has any values stored in it at all (something many people fail to consider)? Remember, people can send data to your form without using a web browser. You may one day get to the point of testing that $_POST
only has the allowed keys, but that conversation is for another day.
You could also store your data in a service, with a setter to store it and a getter to retrieve it:
import { Injectable } from '@angular/core';
@Injectable()
export class StorageService {
public scope: Array<any> | boolean = false;
constructor() {
}
public getScope(): Array<any> | boolean {
return this.scope;
}
public setScope(scope: any): void {
this.scope = scope;
}
}
class test
{
void message()
{
System.out.println("super class");
}
}
class demo extends test
{
int z;
demo(int y)
{
super.message();
z=y;
System.out.println("re:"+z);
}
}
class free{
public static void main(String ar[]){
demo d=new demo(6);
}
}
Use String#slice
, also aliased as []
.
a = "hello there"
a[1] #=> "e"
a[1,3] #=> "ell"
a[1..3] #=> "ell"
a[6..-1] #=> "there"
a[6..] #=> "there" (requires Ruby 2.6+)
a[-3,2] #=> "er"
a[-4..-2] #=> "her"
a[12..-1] #=> nil
a[-2..-4] #=> ""
a[/[aeiou](.)\1/] #=> "ell"
a[/[aeiou](.)\1/, 0] #=> "ell"
a[/[aeiou](.)\1/, 1] #=> "l"
a[/[aeiou](.)\1/, 2] #=> nil
a["lo"] #=> "lo"
a["bye"] #=> nil
You can change the send line to this:
c.send(b'Thank you for connecting')
The b
makes it bytes instead.
command = "ls *"
def execute_state = sh(returnStdout: true, script: command)
but if the command fails, the process will terminate
You can use a cursor and temp table approach so you aren't doing full table scan each time. What this would be doing is populating the temp table with all of your keywords, and then with each string in the @letters XML
, it would remove any records from the temp table. At the end, you only have records in your temp table that have each of your desired strings in it.
declare @letters xml
SET @letters = '<letters>
<letter>a</letter>
<letter>b</letter>
</letters>'
-- SELECTING LETTERS FROM THE XML
SELECT Letters.l.value('.', 'nvarchar(50)') AS letter
FROM @letters.nodes('/letters/letter') AS Letters(l)
-- CREATE A TEMP TABLE WE CAN DELETE FROM IF A RECORD DOESN'T HAVE THE LETTER
CREATE TABLE #TempResults (keywordID int not null, keyWord nvarchar(50) not null)
INSERT INTO #TempResults (keywordID, keyWord)
SELECT employeeID, firstName FROM Employee
-- CREATE A CURSOR, SO WE CAN LOOP THROUGH OUR LETTERS AND REMOVE KEYWORDS THAT DON'T MATCH
DECLARE Cursor_Letters CURSOR READ_ONLY
FOR
SELECT Letters.l.value('.', 'nvarchar(50)') AS letter
FROM @letters.nodes('/letters/letter') AS Letters(l)
DECLARE @letter varchar(50)
OPEN Cursor_Letters
FETCH NEXT FROM Cursor_Letters INTO @letter
WHILE (@@fetch_status <> -1)
BEGIN
IF (@@fetch_status <> -2)
BEGIN
DELETE FROM #TempResults
WHERE keywordID NOT IN
(SELECT keywordID FROM #TempResults WHERE keyWord LIKE '%' + @letter + '%')
END
FETCH NEXT FROM Cursor_Letters INTO @letter
END
CLOSE Cursor_Letters
DEALLOCATE Cursor_Letters
SELECT * FROM #TempResults
DROP Table #TempResults
GO
Using JavaScript
public ActionResult Index()
{
return Content("<script>window.location = 'http://www.example.com';</script>");
}
Note: As @Jeremy Ray Brown said, this is not the best option, but you might find it useful in some situations.
Hope this helps.
From the fine manual:
ALTER TABLE mytable ALTER COLUMN mycolumn DROP NOT NULL;
There's no need to specify the type when you're just changing the nullability.
If your server is still listening on port 80, check the permissions on the DocumentRoot folder and whether the DirectoryIndex file exists.
You can construct a Map with a variable number of tuples. So use the map method on the collection to convert it into a collection of tuples and then use the : _* trick to convert the result into a variable argument.
scala> val list = List("this", "maps", "string", "to", "length") map {s => (s, s.length)}
list: List[(java.lang.String, Int)] = List((this,4), (maps,4), (string,6), (to,2), (length,6))
scala> val list = List("this", "is", "a", "bunch", "of", "strings")
list: List[java.lang.String] = List(this, is, a, bunch, of, strings)
scala> val string2Length = Map(list map {s => (s, s.length)} : _*)
string2Length: scala.collection.immutable.Map[java.lang.String,Int] = Map(strings -> 7, of -> 2, bunch -> 5, a -> 1, is -> 2, this -> 4)
Use the Maven debug option, ie mvn -X
:
Apache Maven 3.0.3 (r1075438; 2011-02-28 18:31:09+0100)
Maven home: /usr/java/apache-maven-3.0.3
Java version: 1.6.0_12, vendor: Sun Microsystems Inc.
Java home: /usr/java/jdk1.6.0_12/jre
Default locale: en_US, platform encoding: UTF-8
OS name: "linux", version: "2.6.32-32-generic", arch: "i386", family: "unix"
[INFO] Error stacktraces are turned on.
[DEBUG] Reading global settings from /usr/java/apache-maven-3.0.3/conf/settings.xml
[DEBUG] Reading user settings from /home/myhome/.m2/settings.xml
...
In this output, you can see that the settings.xml is loaded from /home/myhome/.m2/settings.xml
.
Try using System.getProperty("line.separator")
to get a new line.
Not a VIM user myself, so forgive me if this is totally offbase. But according to what I gather from the following VIM Tips site:
" where was an option set
:scriptnames : list all plugins, _vimrcs loaded (super)
:verbose set history? : reveals value of history and where set
:function : list functions
:func SearchCompl : List particular function
Not sure if it was because of my git version (1.7.2) or what, but the approaches listed above using machine name and IP options were not working for me. An additional detail that may/may not be important is that the repo was a bare repo that I had initialized and pushed to from a different machine.
I was trying to clone project1 as advised above with commands like:
$ git clone file:////<IP_ADDRESS>/home/user/git/project1
Cloning into project1...
fatal: '//<IP_ADDRESS>/home/user/git/project1' does not appear to be a git repository
fatal: The remote end hung up unexpectedly
and
$ git clone file:////<MACHINE_NAME>/home/user/git/project1
Cloning into project1...
fatal: '//<MACHINE_NAME>/home/user/git/project1' does not appear to be a git repository
fatal: The remote end hung up unexpectedly
What did work for me was something simpler:
$ git clone ../git/project1
Cloning into project1...
done.
Note - even though the repo being cloned from was bare, this did produce a 'normal' clone with all the actual code/image/resource files that I was hoping for (as opposed to the internals of the git repo).
When working with HTTP it's almost always more useful to refer to HttpURLConnection
rather than the base class URLConnection
(since URLConnection
is an abstract class when you ask for URLConnection.openConnection()
on a HTTP URL that's what you'll get back anyway).
Then you can instead of relying on URLConnection#setDoOutput(true)
to implicitly set the request method to POST instead do httpURLConnection.setRequestMethod("POST")
which some might find more natural (and which also allows you to specify other request methods such as PUT, DELETE, ...).
It also provides useful HTTP constants so you can do:
int responseCode = httpURLConnection.getResponseCode();
if (responseCode == HttpURLConnection.HTTP_OK) {
You effectively converted your date from string format into a date object. If you print it out at that point, you will get the standard date formatting output. In order to format it after that, you then need to convert it back into a string with the specified format (already specified previously):
String startDateString = "06/27/2007";
DateFormat df = new SimpleDateFormat("MM/dd/yyyy");
Date startDate;
try {
startDate = df.parse(startDateString);
String newDateString = df.format(startDate);
System.out.println(newDateString);
} catch (ParseException e) {
e.printStackTrace();
}
Below, I have written an answer for n
equal to 5, but you can apply the same approach to draw DFAs for any value of n
and any positional number system, e.g. binary, ternary...
First learn the term 'Complete DFA': a DFA defined on a complete domain δ: Q × Σ → Q is called a 'Complete DFA'. In other words, in the transition diagram of a complete DFA there is no missing edge (i.e. from each state in Q there is one outgoing edge present for every language symbol in Σ). Note: sometimes we define a partial DFA as δ ⊆ Q × Σ → Q (read: How does "δ: Q × Σ → Q" read in the definition of a DFA).
Step-1: When you divide a number ω by n
, the remainder can be either 0, 1, ..., (n - 2) or (n - 1). If the remainder is 0
, that means ω is divisible by n
, otherwise not. So, in my DFA there will be a state qr corresponding to each remainder value r
, where 0 <= r <= (n - 1)
, and the total number of states in the DFA is n
.
After processing a number string ω over Σ, the end state qr implies that ω % n = r (% is the remainder operator).
In any automaton, a state works as a memory element. A state in an automaton stores some information, like a fan's switch that can tell whether the fan is in the 'off' or the 'on' state. For n = 5, the five states in the DFA correspond to the five remainder values as follows:
Using above information, we can start drawing transition diagram TD of five states as follows:
Figure-1
So, 5 states for 5 remainder values. After processing a string ω, if the end-state becomes q0, that means the decimal equivalent of the input string is divisible by 5. In the figure above, q0 is marked as the final state with two concentric circles.
Additionally, I have defined a transition rule δ:(q0, 0) → q0 as a self loop for symbol '0'
at state q0; this is because the decimal equivalent of any string consisting of only '0'
s is 0, and 0 is divisible by n
.
Step-2: The TD above is incomplete and can only process strings of '0'
s. Now add some more edges so that it can process the subsequent numbers' strings. The table below shows the new transition rules that can be added in this step:
Number | Binary | Remainder (%5) | End-state
-------+--------+----------------+----------
One    | 1      | 1              | q1
Two    | 10     | 2              | q2
Three  | 11     | 3              | q3
Four   | 100    | 4              | q4
- To process '1', there should be a transition rule δ:(q0, 1) → q1.
- For '10', the end-state should be q2, and to process '10' we just need to add one more transition rule δ:(q1, 0) → q2.
- For '11', the end-state is q3, and we need to add a transition rule δ:(q1, 1) → q3.
- For '100', the end-state is q4. The TD already processes the prefix string '10', and we just need to add a new transition rule δ:(q2, 0) → q4.
Figure-2
Step-3: Five = 101
The transition diagram in figure-2 above is still incomplete and there are many missing edges; for example, no transition is defined for δ:(q2, 1). And that rule should be present to process strings like '101'
.
Because '101'
= 5 is divisible by 5, to accept '101'
I will add δ:(q2, 1) → q0 to figure-2 above.
Path: (q0) -1→ (q1) -0→ (q2) -1→ (q0)
With this new rule, the transition diagram becomes as follows:
Figure-3
Below, in each step I pick the next binary number to add a missing edge, until I get a TD that is a 'complete DFA'.
Step-4: Six = 110.
We can process '11'
in the present TD in figure-3 as: (q0) -11→ (q3) -0→ (?). Because 6 % 5 = 1, this means adding one rule δ:(q3, 0) → q1.
Figure-4
Step-5: Seven = 111
Number | Binary | Remainder (%5) | End-state | Path       | Add
-------+--------+----------------+-----------+------------+----------
Seven  | 111    | 7 % 5 = 2      | q2        | q0 -11→ q3 | q3 -1→ q2
Figure-5
Step-6: Eight = 1000
Number | Binary | Remainder (%5) | End-state | Path        | Add
-------+--------+----------------+-----------+-------------+----------
Eight  | 1000   | 8 % 5 = 3      | q3        | q0 -100→ q4 | q4 -0→ q3
Figure-6
Step-7: Nine = 1001
Number | Binary | Remainder (%5) | End-state | Path        | Add
-------+--------+----------------+-----------+-------------+----------
Nine   | 1001   | 9 % 5 = 4      | q4        | q0 -100→ q4 | q4 -1→ q4
Figure-7
In TD-7, the total number of edges is 10 = |Q| × |Σ| = 5 × 2. And it is a complete DFA that can accept all possible binary strings whose decimal equivalent is divisible by 5.
Step-1: Exactly the same as for binary; use figure-1.
Step-2 Add Zero, One, Two
Decimal | Ternary | Remainder (%5) | End-state | Add
--------+---------+----------------+-----------+---------------
Zero    | 0       | 0              | q0        | δ:(q0, 0) → q0
One     | 1       | 1              | q1        | δ:(q0, 1) → q1
Two     | 2       | 2              | q2        | δ:(q0, 2) → q2
Figure-8
Step-3 Add Three, Four, Five
Decimal | Ternary | Remainder (%5) | End-state | Add
--------+---------+----------------+-----------+---------------
Three   | 10      | 3              | q3        | δ:(q1, 0) → q3
Four    | 11      | 4              | q4        | δ:(q1, 1) → q4
Five    | 12      | 0              | q0        | δ:(q1, 2) → q0
Figure-9
Step-4 Add Six, Seven, Eight
Decimal | Ternary | Remainder (%5) | End-state | Add
--------+---------+----------------+-----------+---------------
Six     | 20      | 1              | q1        | δ:(q2, 0) → q1
Seven   | 21      | 2              | q2        | δ:(q2, 1) → q2
Eight   | 22      | 3              | q3        | δ:(q2, 2) → q3
Figure-10
Step-5 Add Nine, Ten, Eleven
Decimal | Ternary | Remainder (%5) | End-state | Add
--------+---------+----------------+-----------+---------------
Nine    | 100     | 4              | q4        | δ:(q3, 0) → q4
Ten     | 101     | 0              | q0        | δ:(q3, 1) → q0
Eleven  | 102     | 1              | q1        | δ:(q3, 2) → q1
Figure-11
Step-6 Add Twelve, Thirteen, Fourteen
Decimal  | Ternary | Remainder (%5) | End-state | Add
---------+---------+----------------+-----------+---------------
Twelve   | 110     | 2              | q2        | δ:(q4, 0) → q2
Thirteen | 111     | 3              | q3        | δ:(q4, 1) → q3
Fourteen | 112     | 4              | q4        | δ:(q4, 2) → q4
Figure-12
The total number of edges in the transition diagram of figure-12 is 15 = |Q| × |Σ| = 5 * 3 (a complete DFA). And this DFA can accept all strings over {0, 1, 2} whose decimal equivalent is divisible by 5.
If you look at each step, the table has three entries because at each step I add all possible outgoing edges from a state to make a complete DFA (and I add each edge so that state qr is reached exactly when the remainder is r
)!
To add further, remember that the union of two regular languages is also regular. If you need to design a DFA that accepts binary strings whose decimal equivalent is divisible by either 3 or 5, then draw two separate DFAs for divisibility by 3 and by 5, then union both DFAs to construct the target DFA (for 1 <= n <= 10 you would have to union 10 DFAs).
If you are asked to draw a DFA that accepts binary strings whose decimal equivalent is divisible by both 5 and 3, then you are looking for the DFA for divisibility by 15 (but what about 6 and 8?).
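A tiny sketch of that idea in Python (mine, not part of the construction above): simulating both remainder DFAs in lockstep is exactly the product automaton, and switching between or/and gives union/intersection:
def divisible_by_3_or_5(bits):
    r3 = r5 = 0
    for b in bits:               # run both DFAs in lockstep (product construction)
        d = int(b)
        r3 = (r3 * 2 + d) % 3
        r5 = (r5 * 2 + d) % 5
    return r3 == 0 or r5 == 0    # use `and` instead for divisibility by 15

print(divisible_by_3_or_5('1111'))   # 15 -> True
print(divisible_by_3_or_5('1010'))   # 10 -> True
print(divisible_by_3_or_5('111'))    # 7 -> False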
Note: DFAs drawn with this technique will be minimized DFAs only when there is no common factor between the number n
and the base, e.g. there is none between 5 and 2 in the first example, or between 5 and 3 in the second example, hence both DFAs constructed above are minimized DFAs. If you are interested in reading further about the possible minimal number of states for a number n
and base b
, read the paper: Divisibility and State Complexity.
Below I have added a Python script; I wrote it for fun while learning the Python library pygraphviz. I am adding it in the hope it can be helpful for someone in some way.
So we can apply the above trick to draw a DFA that recognizes number strings in any base 'b'
that are divisible by a given number 'n'
. In that DFA the total number of states will be n
(for n
remainders) and the number of edges should be equal to 'b' * 'n' for a complete DFA: 'b' = number of symbols in the DFA's language and 'n' = number of states.
Using the above trick, below I have written a Python script to draw the DFA for an input base
and number
. In the script, the function divided_by_N
populates the DFA's transition rules in base * number
steps. In each step num, I convert num
into the number string num_s
using the function baseN()
. To avoid processing each number string, I have used a temporary data structure lookup_table
. In each step, the end-state for the number string num_s
is evaluated and stored in lookup_table
for use in the next step.
For the DFA's transition graph, I have written a function draw_transition_graph
using the Pygraphviz library (very easy to use). To use this script you need to install graphviz
. To add colorful edges to the transition diagram, I randomly generate color codes for each symbol in the get_color_dict
function.
#!/usr/bin/env python
import pygraphviz as pgv
from pprint import pprint
from random import choice as rchoice
def baseN(n, b, syms="0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ"):
""" converts a number `n` into base `b` string """
return ((n == 0) and syms[0]) or (
baseN(n//b, b, syms).lstrip(syms[0]) + syms[n % b])
def divided_by_N(number, base):
"""
constructs DFA that accepts given `base` number strings
those are divisible by a given `number`
"""
ACCEPTING_STATE = START_STATE = '0'
SYMBOL_0 = '0'
dfa = {
str(from_state): {
str(symbol): 'to_state' for symbol in range(base)
}
for from_state in range(number)
}
dfa[START_STATE][SYMBOL_0] = ACCEPTING_STATE
# `lookup_table` keeps track: 'number string' -->[dfa]--> 'end_state'
lookup_table = { SYMBOL_0: ACCEPTING_STATE }.setdefault
for num in range(number * base):
end_state = str(num % number)
num_s = baseN(num, base)
before_end_state = lookup_table(num_s[:-1], START_STATE)
dfa[before_end_state][num_s[-1]] = end_state
lookup_table(num_s, end_state)
return dfa
def symcolrhexcodes(symbols):
"""
returns dict of color codes mapped with alphabets symbol in symbols
"""
return {
symbol: '#'+''.join([
rchoice("8A6C2B590D1F4E37") for _ in "FFFFFF"
])
for symbol in symbols
}
def draw_transition_graph(dfa, filename="filename"):
ACCEPTING_STATE = START_STATE = '0'
colors = symcolrhexcodes(dfa[START_STATE].keys())
# draw transition graph
tg = pgv.AGraph(strict=False, directed=True, decorate=True)
for from_state in dfa:
for symbol, to_state in dfa[from_state].iteritems():
tg.add_edge("Q%s"%from_state, "Q%s"%to_state,
label=symbol, color=colors[symbol],
fontcolor=colors[symbol])
# add intial edge from an invisible node!
tg.add_node('null', shape='plaintext', label='start')
tg.add_edge('null', "Q%s"%START_STATE,)
# make end acception state as 'doublecircle'
tg.get_node("Q%s"%ACCEPTING_STATE).attr['shape'] = 'doublecircle'
tg.draw(filename, prog='circo')
tg.close()
def print_transition_table(dfa):
print("DFA accepting number string in base '%(base)s' "
"those are divisible by '%(number)s':" % {
'base': len(dfa['0']),
'number': len(dfa),})
pprint(dfa)
if __name__ == "__main__":
number = input ("Enter NUMBER: ")
base = input ("Enter BASE of number system: ")
dfa = divided_by_N(number, base)
print_transition_table(dfa)
draw_transition_graph(dfa)
Execute it:
~/study/divide-5/script$ python script.py
Enter NUMBER: 5
Enter BASE of number system: 4
DFA accepting number string in base '4' those are divisible by '5':
{'0': {'0': '0', '1': '1', '2': '2', '3': '3'},
'1': {'0': '4', '1': '0', '2': '1', '3': '2'},
'2': {'0': '3', '1': '4', '2': '0', '3': '1'},
'3': {'0': '2', '1': '3', '2': '4', '3': '0'},
'4': {'0': '1', '1': '2', '2': '3', '3': '4'}}
~/study/divide-5/script$ ls
script.py filename.png
~/study/divide-5/script$ display filename
Output:
DFA accepting number strings in base 4 that are divisible by 5
Similarly, enter base = 4 and number = 7 to generate a DFA accepting number strings in base '4' that are divisible by '7'.
Btw, try changing filename
to .png
or .jpeg
.
References I used to write this script:
➊ Function baseN
from "convert integer to a string in a given numeric base in python"
➋ To install "pygraphviz": "Python does not see pygraphviz"
➌ To learn use of Pygraphviz: "Python-FSM"
➍ To generate random hex color codes for each language symbol: "How would I make a random hexdigit code generator using .join and for loops?"
Alternatively, you can do it like this:
var _items = from a in StudentsGrades
group a by a.Name;
foreach (var _itemGroup in _items)
{
foreach (var _item in _itemGroup.OrderBy(a=>a.grade))
{
------------------------
--------------------------
}
}
Very simple, no library required:
var date = new Date();
var firstDay = new Date(date.getFullYear(), date.getMonth(), 1);
var lastDay = new Date(date.getFullYear(), date.getMonth() + 1, 0);
or you might prefer:
var date = new Date(), y = date.getFullYear(), m = date.getMonth();
var firstDay = new Date(y, m, 1);
var lastDay = new Date(y, m + 1, 0);
Some browsers will treat two digit years as being in the 20th century, so that:
new Date(14, 0, 1);
gives 1 January, 1914. To avoid that, create a Date then set its values using setFullYear:
var date = new Date();
date.setFullYear(14, 0, 1); // 1 January, 14
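For comparison, a Python version of mine using the standard library, which exposes the month length directly instead of the 'day zero' trick:
import calendar
from datetime import date

today = date.today()
first_day = today.replace(day=1)
# monthrange returns (weekday of day 1, number of days in the month)
last_day = today.replace(day=calendar.monthrange(today.year, today.month)[1])
print(first_day, last_day)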
If you want to play nice with Windows file paths (under Cygwin) you can also try this:
fname=${fullfile##*[/|\\]}
This will account for backslash separators when using BaSH on Windows.
I know in the question you state "I assume JavaScript is used to create this effect" but CSS can be used too, an example is below.
CSS
.fancy-link {
color: #333333;
text-decoration: none;
transition: color 0.3s linear;
-webkit-transition: color 0.3s linear;
-moz-transition: color 0.3s linear;
}
.fancy-link:hover {
color: #F44336;
}
HTML
<a class="fancy-link" href="#">My Link</a>
And here is a JSFIDDLE for the above code!
Marcel in one of the answers points out you can "transition multiple CSS properties" you can also use "all" to effect the element with all your :hover styles like below.
CSS
.fancy-link {
color: #333333;
text-decoration: none;
transition: all 0.3s linear;
-webkit-transition: all 0.3s linear;
-moz-transition: all 0.3s linear;
}
.fancy-link:hover {
color: #F44336;
padding-left: 10px;
}
HTML
<a class="fancy-link" href="#">My Link</a>
And here is a JSFIDDLE for the "all" example!
It is one of the fundamentals of both languages that a pointer evaluates to a type and value that can be used as a control expression: bool
in C++ and int
in C. Just use it.
Here, I summarize a few ways which may return different results to your question, so be careful:
# First assign your "id"s to an R object.
# Here's a hypothetical example:
id <- c("a","b","b","c","c","c","d","d","d","d")
#To return ALL MINUS ONE duplicated values:
id[duplicated(id)]
## [1] "b" "c" "c" "d" "d" "d"
#To return ALL duplicated values by specifying fromLast argument:
id[duplicated(id) | duplicated(id, fromLast=TRUE)]
## [1] "b" "b" "c" "c" "c" "d" "d" "d" "d"
#Yet another way to return ALL duplicated values, using %in% operator:
id[ id %in% id[duplicated(id)] ]
## [1] "b" "b" "c" "c" "c" "d" "d" "d" "d"
Hope these help. Good luck.
Use array_rand().
See the PHP manual: http://php.net/manual/en/function.array-rand.php
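A small sketch of typical usage (the array contents are made up):
$colors = array('red', 'green', 'blue', 'yellow');
$key = array_rand($colors);      // one random key
echo $colors[$key];
$keys = array_rand($colors, 2);  // an array of two distinct random keys
echo $colors[$keys[0]], ' ', $colors[$keys[1]];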
Might be a bit overkill, but when I work with ASCII and Unicode in the same files, repeated decode calls can be a pain. This is what I use (Python 2, where the unicode type exists):
def make_unicode(input):
    if type(input) != unicode:
        # assume UTF-8 encoded bytes; decode to a unicode object
        input = input.decode('utf-8')
    return input
There are two potentially major problems with the leading answer to this question. First, as per the docs:
public BufferedImage getSubimage(int x, int y, int w, int h)
Returns a subimage defined by a specified rectangular region. The returned BufferedImage shares the same data array as the original image.
Essentially, what this means is that the result from getSubimage acts as a pointer into a subsection of the original image.
Why is this important? Well, if you are planning to edit the subimage for any reason, the edits will also happen to the original image. For example, I ran into this problem when I was using the smaller image in a separate window to zoom in on the original image (kind of like a magnifying glass). I made it possible to invert the colors to see certain details more easily, but the area that was "zoomed" also got inverted in the original image! So there was a small section of the original image that had inverted colors while the rest of it remained normal. In many cases, this won't matter, but if you want to edit the image, or if you just want an independent copy of the cropped section, you might want to consider another method.
Which brings us to the second problem. Fortunately, it is not as big a problem as the first. getSubimage shares the same data array as the original image, which means the entire original image is still stored in memory. Assuming that by "cropping" the image you actually want a smaller image, you will need to redraw it as a new image rather than just getting the subimage.
Try this:
BufferedImage img = image.getSubimage(startX, startY, width, height); // x, y, width and height of the desired crop (note: width/height, not end coordinates)
BufferedImage copyOfImage = new BufferedImage(img.getWidth(), img.getHeight(), BufferedImage.TYPE_INT_RGB);
Graphics g = copyOfImage.createGraphics();
g.drawImage(img, 0, 0, null);
return copyOfImage; //or use it however you want
This technique will give you the cropped image you are looking for by itself, without the link back to the original image. This will preserve the integrity of the original image as well as save you the memory overhead of storing the larger image. (If you do dump the original image later)
I have just tested Google Geocoder and got the same problem as you have. I noticed I only get the OVER_QUERY_LIMIT status once every 12 requests, so I wait for 1 second (that's the minimum delay to wait). It slows down the application, but less than waiting 1 second on every request.
info = getInfos(getLatLng(code)); //In here I call Google API
record(code, info);
generated++;
if(generated%interval == 0) {
holdOn(delay); // Every x requests, I sleep for 1 second
}
With the basic holdOn method :
private void holdOn(long delay) {
try {
Thread.sleep(delay);
} catch (InterruptedException ex) {
// ignore
}
}
Hope it helps
While I really admire the answer from @hoge, both for a very concise technique and a wrapper function to generalize it, and I encourage upvotes for it, I am compelled to comment on the other two answers that use temp files (it gnaws at me like fingernails on a chalkboard!).
Assuming the file is not huge, you can force the pipeline to operate in discrete sections--thereby obviating the need for a temp file--with judicious use of parentheses:
(Get-Content $file | Select-Object -Skip 1) | Set-Content $file
... or in short form:
(gc $file | select -Skip 1) | sc $file
Note in 2018: readAsBinaryString is outdated. For use cases where previously you'd have used it, these days you'd use readAsArrayBuffer (or in some cases, readAsDataURL) instead.
The spec for readAsBinaryString says that the data must be represented as a binary string, where:
...every byte is represented by an integer in the range [0..255].
JavaScript originally didn't have a "binary" type (until ECMAScript 5's WebGL support of Typed Array* (details below) -- it has been superseded by ECMAScript 2015's ArrayBuffer) and so they went with a String with the guarantee that no character stored in the String would be outside the range 0..255. (They could have gone with an array of Numbers instead, but they didn't; perhaps large Strings are more memory-efficient than large arrays of Numbers, since Numbers are floating-point.)
If you're reading a file that's mostly text in a western script (mostly English, for instance), then that string is going to look a lot like text. If you read a file with Unicode characters in it, you should notice a difference, since JavaScript strings are UTF-16** (details below) and so some characters will have values above 255, whereas a "binary string" according to the File API spec wouldn't have any values above 255 (you'd have two individual "characters" for the two bytes of the Unicode code point).
If you're reading a file that's not text at all (an image, perhaps), you'll probably still get a very similar result between readAsText and readAsBinaryString, but with readAsBinaryString you know that there won't be any attempt to interpret multi-byte sequences as characters. You don't know that if you use readAsText, because readAsText will use an encoding determination to try to figure out what the file's encoding is and then map it to JavaScript's UTF-16 strings.
You can see the effect if you create a file and store it in something other than ASCII or UTF-8. (In Windows you can do this via Notepad; the "Save As" dialog has an encoding drop-down with "Unicode" on it, by which, looking at the data, they seem to mean UTF-16; I'm sure Mac OS and *nix editors have a similar feature.) Here's a page that dumps the result of reading a file both ways:
<!DOCTYPE HTML>
<html>
<head>
<meta http-equiv="Content-type" content="text/html;charset=UTF-8">
<title>Show File Data</title>
<style type='text/css'>
body {
font-family: sans-serif;
}
</style>
<script type='text/javascript'>
function loadFile() {
var input, file, fr;
if (typeof window.FileReader !== 'function') {
bodyAppend("p", "The file API isn't supported on this browser yet.");
return;
}
input = document.getElementById('fileinput');
if (!input) {
bodyAppend("p", "Um, couldn't find the fileinput element.");
}
else if (!input.files) {
bodyAppend("p", "This browser doesn't seem to support the `files` property of file inputs.");
}
else if (!input.files[0]) {
bodyAppend("p", "Please select a file before clicking 'Load'");
}
else {
file = input.files[0];
fr = new FileReader();
fr.onload = receivedText;
fr.readAsText(file);
}
function receivedText() {
showResult(fr, "Text");
fr = new FileReader();
fr.onload = receivedBinary;
fr.readAsBinaryString(file);
}
function receivedBinary() {
showResult(fr, "Binary");
}
}
function showResult(fr, label) {
var markup, result, n, aByte, byteStr;
markup = [];
result = fr.result;
for (n = 0; n < result.length; ++n) {
aByte = result.charCodeAt(n);
byteStr = aByte.toString(16);
if (byteStr.length < 2) {
byteStr = "0" + byteStr;
}
markup.push(byteStr);
}
bodyAppend("p", label + " (" + result.length + "):");
bodyAppend("pre", markup.join(" "));
}
function bodyAppend(tagName, innerHTML) {
var elm;
elm = document.createElement(tagName);
elm.innerHTML = innerHTML;
document.body.appendChild(elm);
}
</script>
</head>
<body>
<form action='#' onsubmit="return false;">
<input type='file' id='fileinput'>
<input type='button' id='btnLoad' value='Load' onclick='loadFile();'>
</form>
</body>
</html>
If I use that with a "Testing 1 2 3" file stored in UTF-16, here are the results I get:
Text (13): 54 65 73 74 69 6e 67 20 31 20 32 20 33 Binary (28): ff fe 54 00 65 00 73 00 74 00 69 00 6e 00 67 00 20 00 31 00 20 00 32 00 20 00 33 00
As you can see, readAsText interpreted the characters and so I got 13 (the length of "Testing 1 2 3"), and readAsBinaryString didn't, and so I got 28 (the two-byte BOM plus two bytes for each character).
* XMLHttpRequest.response with responseType = "arraybuffer" is supported in HTML 5.
** "JavaScript strings are UTF-16" may seem like an odd statement; aren't they just Unicode? No, a JavaScript string is a series of UTF-16 code units; you see surrogate pairs as two individual JavaScript "characters" even though, in fact, the surrogate pair as a whole is just one character. See the link for details.
In case someone is using MicroK8s: You need a network load balancer.
MicroK8s comes with metallb, you can enable it like this:
microk8s enable metallb
<pending> should turn into an actual IP address then.
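You can also pass an address pool directly when enabling the add-on; the range below is just an example and must match your network:
microk8s enable metallb:10.64.140.43-10.64.140.49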
It is best practice to always supply the permission when registering the receiver; otherwise you will receive broadcasts from any application that sends a matching intent. This can allow malicious apps to broadcast to your receiver.
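A sketch of what that can look like (the action and permission strings are illustrative; the permission itself must be declared in the manifest):
// only senders holding the given permission can reach this receiver
registerReceiver(myReceiver,
        new IntentFilter("com.example.MY_ACTION"),
        "com.example.permission.MY_PERMISSION", // required sender permission
        null);                                  // null scheduler = deliver on the main thread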
OK, you can simply do:
git tag -a <tag> <commit-hash>
So if you want to add tag 1.0.2 to commit e50f795, simply do:
git tag -a 1.0.2 e50f795
You can also add a message at the end using -m, something like this:
git tag -a 1.0.2 e50f795 -m "my message"
After all, you need to push it to the remote; to do that, simply do:
git push origin 1.0.2
If you have many tags and don't want to mention them one by one, simply do:
git push origin --tags
to push all tags together...
Also, I created the steps in the image below for more clarification:
You can also add the tag in Hub or with tools like SourceTree, to avoid the previous steps. I logged in to my Bitbucket in this case and did it from there: open the Tags view (it shows "No tags" if none exist yet) and click on the + icon.
As @Felix Kling suggested, use action='store_true':
>>> from argparse import ArgumentParser
>>> p = ArgumentParser()
>>> _ = p.add_argument('-f', '--foo', action='store_true')
>>> args = p.parse_args()
>>> args.foo
False
>>> args = p.parse_args(['-f'])
>>> args.foo
True
Surprisingly, this library hasn't been mentioned yet: the Python package named python-slugify, which does a pretty good job of slugifying:
pip install python-slugify
Works like this:
from slugify import slugify
txt = "This is a test ---"
r = slugify(txt)
self.assertEquals(r, "this-is-a-test")
txt = "This -- is a ## test ---"
r = slugify(txt)
self.assertEquals(r, "this-is-a-test")
txt = 'C\'est déjà l\'été.'
r = slugify(txt)
self.assertEquals(r, "cest-deja-lete")
txt = 'Nín hao. Wo shì zhong guó rén'
r = slugify(txt)
self.assertEquals(r, "nin-hao-wo-shi-zhong-guo-ren")
txt = 'Компьютер'
r = slugify(txt)
self.assertEquals(r, "kompiuter")
txt = 'jaja---lol-méméméoo--a'
r = slugify(txt)
self.assertEquals(r, "jaja-lol-mememeoo-a")
You can try this:
var latExp = /^(?=.)-?((8[0-5]?)|([0-7]?[0-9]))?(?:\.[0-9]{1,20})?$/;
var lngExp = /^(?=.)-?((0?[8-9][0-9])|180|([0-1]?[0-7]?[0-9]))?(?:\.[0-9]{1,20})?$/;
int op = 0;
string input = string.Empty; // note: "in" is a reserved C# keyword, so use another variable name
do
{
    Console.WriteLine("enter choice");
    input = Console.ReadLine();
} while (!int.TryParse(input, out op));
map isn't particularly pythonic. I would recommend using list comprehensions instead:
map(f, iterable) is basically equivalent to:
[f(x) for x in iterable]
map on its own can't do a Cartesian product, because the length of its output list is always the same as its input list. You can trivially do a Cartesian product with a list comprehension though:
[(a, b) for a in iterable_a for b in iterable_b]
The syntax is a little confusing -- that's basically equivalent to:
result = []
for a in iterable_a:
for b in iterable_b:
result.append((a, b))
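For what it's worth, the standard library also covers the Cartesian product case; a quick sketch:
from itertools import product

pairs = list(product([1, 2], ['a', 'b']))
# [(1, 'a'), (1, 'b'), (2, 'a'), (2, 'b')]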
Yes, LINQ to Objects supports this with Enumerable.Concat:
var together = first.Concat(second);
NB: should first or second be null, you would receive an ArgumentNullException. To avoid this and treat nulls as you would an empty set, use the null-coalescing operator like so:
var together = (first ?? Enumerable.Empty<string>()).Concat(second ?? Enumerable.Empty<string>()); //amending `<string>` to the appropriate type
I know it's a late answer, but I think this is what OP asked for.
myobj[Object.keys(myobj)[0]];
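For example, with a made-up object:
var myobj = { first: 1, second: 2 };
var firstValue = myobj[Object.keys(myobj)[0]]; // 1
// note: string-key order follows insertion order in modern engines,
// but older engines made no ordering guarantee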
There is no such option. What you can do is run mkdir -p before copying the file.
I made a very cool script you can use to copy files to locations that don't exist:
#!/bin/bash
if [ ! -d "$2" ]; then
mkdir -p "$2"
fi
cp -R "$1" "$2"
Now just save it, give it permissions and run it using
./cp-improved SOURCE DEST
I put the -R option in, but it's just a draft; I know it can be improved in many ways. Hope it helps you.
Here is something really neat and simple (at least I believe so :)) that requires no manipulation of the date being cloned and no overloading of the browser's native functions like toJSON (reference: How to JSON stringify a javascript Date and preserve timezone, courtesy Shawson).
Pass a replacer function to JSON.stringify that stringifies stuff to your heart's content! This way you don't have to do hour and minute diffs or any other manipulations.
I have put in console.logs to see intermediate results, so it is clear what is going on and how the recursion is working. That reveals something worthy of notice: the value param to the replacer is already converted to ISO date format :). Use this[key] to work with the original data.
var replacer = function(key, value)
{
var returnVal = value;
if(this[key] instanceof Date)
{
console.log("replacer called with key - ", key, " value - ", value, this[key]);
returnVal = this[key].toString();
/* Above line does not strictly speaking clone the date as in the cloned object
* it is a string in same format as the original but not a Date object. I tried
* multiple things but was unable to cause a Date object being created in the
* clone.
* Please Heeeeelp someone here!
returnVal = new Date(JSON.parse(JSON.stringify(this[key]))); //OR
returnVal = new Date(this[key]); //OR
returnVal = this[key]; //careful, returning original obj so may have potential side effect
*/
}
console.log("returning value: ", returnVal);
/* if undefined is returned, the key is not at all added to the new object(i.e. clone),
* so return null. null !== undefined but both are falsy and can be used as such*/
return this[key] === undefined ? null : returnVal;
};
ab = {prop1: "p1", prop2: [1, "str2", {p1: "p1inner", p2: undefined, p3: null, p4date: new Date()}]};
var abstr = JSON.stringify(ab, replacer);
var abcloned = JSON.parse(abstr);
console.log("ab is: ", ab);
console.log("abcloned is: ", abcloned);
/* abcloned is:
* {
"prop1": "p1",
"prop2": [
1,
"str2",
{
"p1": "p1inner",
"p2": null,
"p3": null,
"p4date": "Tue Jun 11 2019 18:47:50 GMT+0530 (India Standard Time)"
}
]
}
Note p4date is string not Date object but format and timezone are completely preserved.
*/
Make sure you've set your target API (different from the target SDK) in the Project Properties (not the manifest) to be at least 4.0/API 14.
You may do:
a= [1,1,1,2,2,3]
delete_list = [1,3]
delete_list.each do |del|
a.delete_at(a.index(del))
end
result : [1, 1, 2, 2]
An image cannot be shown until it is served from a server, so you need to upload the image to your server before showing its preview.
Following a post from Gabor Grothendieck on the r-help mailing list:
years<-c("20 years old", "1 years old")
library(gsubfn)
pat <- "[-+.e0-9]*\\d"
sapply(years, function(x) strapply(x, pat, as.numeric)[[1]])
Use the files list of the element instead of val():
$("input[type=file]").on('change',function(){
alert(this.files[0].name);
});
If you need one single regex, try:
(?=.*\d)(?=.*[a-z])(?=.*[A-Z])(?=.*\W)
A short explanation:
(?=.*[a-z]) // use positive look ahead to see if at least one lower case letter exists
(?=.*[A-Z]) // use positive look ahead to see if at least one upper case letter exists
(?=.*\d) // use positive look ahead to see if at least one digit exists
(?=.*\W) // use positive look ahead to see if at least one non-word character exists
And I agree with SilentGhost, \W might be a bit broad. I'd replace it with a character set like this: [-+_!@#$%^&*.,?] (feel free to add more of course!)
<input id="inputFileToLoad" type="file" onchange="encodeImageFileAsURL();" />
<div id="imgTest"></div>
<script type='text/javascript'>
function encodeImageFileAsURL() {
var filesSelected = document.getElementById("inputFileToLoad").files;
if (filesSelected.length > 0) {
var fileToLoad = filesSelected[0];
var fileReader = new FileReader();
fileReader.onload = function(fileLoadedEvent) {
var srcData = fileLoadedEvent.target.result; // <--- data: base64
var newImage = document.createElement('img');
newImage.src = srcData;
document.getElementById("imgTest").innerHTML = newImage.outerHTML;
alert("Converted Base64 version is " + document.getElementById("imgTest").innerHTML);
console.log("Converted Base64 version is " + document.getElementById("imgTest").innerHTML);
}
fileReader.readAsDataURL(fileToLoad);
}
}
</script>
In the video source, change the type to "video/quicktime":
<video width="400" controls Autoplay=autoplay>
<source src="D:/mov1.mov" type="video/quicktime">
</video>
I had the same problem. I used a trigger, and in that trigger I called a procedure which computed some values into 2 OUT variables. When I tried to print the result in the trigger body, nothing showed on screen. But then I solved the problem by making 2 local variables in a function, computing what I needed with them, and finally copying those variables into the procedure's OUT variables. I hope it'll be useful and successful!
Use timeit: http://docs.python.org/library/timeit.html
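A minimal sketch of both ways it is typically used (the statement being timed is arbitrary):
import timeit

# run the statement 10,000 times and report the total seconds
print(timeit.timeit("'-'.join(str(n) for n in range(100))", number=10000))

# or from the shell:
#   python -m timeit "'-'.join(str(n) for n in range(100))"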
Assuming you're looking for a quick tactical fix, what you need to do is make sure the cell image is initialized and also that the cell's row is still visible, e.g.:
- (UITableViewCell *)tableView:(UITableView *)tableView cellForRowAtIndexPath:(NSIndexPath *)indexPath {
MyCell *cell = [tableView dequeueReusableCellWithIdentifier:@"cell" forIndexPath:indexPath];
cell.poster.image = nil; // or cell.poster.image = [UIImage imageNamed:@"placeholder.png"];
NSURL *url = [NSURL URLWithString:[NSString stringWithFormat:@"http://myurl.com/%@.jpg", self.myJson[indexPath.row][@"movieId"]]];
NSURLSessionTask *task = [[NSURLSession sharedSession] dataTaskWithURL:url completionHandler:^(NSData * _Nullable data, NSURLResponse * _Nullable response, NSError * _Nullable error) {
if (data) {
UIImage *image = [UIImage imageWithData:data];
if (image) {
dispatch_async(dispatch_get_main_queue(), ^{
MyCell *updateCell = (id)[tableView cellForRowAtIndexPath:indexPath];
if (updateCell)
updateCell.poster.image = image;
});
}
}
}];
[task resume];
return cell;
}
The above code addresses a few problems stemming from the fact that the cell is reused:
You're not initializing the cell image before initiating the background request (meaning that the last image for the dequeued cell will still be visible while the new image is downloading). Make sure to nil the image property of any image views or else you'll see the flickering of images.
A more subtle issue is that on a really slow network, your asynchronous request might not finish before the cell scrolls off the screen. You can use the UITableView method cellForRowAtIndexPath: (not to be confused with the similarly named UITableViewDataSource method tableView:cellForRowAtIndexPath:) to see if the cell for that row is still visible. This method will return nil if the cell is not visible.
The issue is that the cell has scrolled off by the time your async method has completed, and, worse, the cell has been reused for another row of the table. By checking to see if the row is still visible, you'll ensure that you don't accidentally update the image with the image for a row that has since scrolled off the screen.
Somewhat unrelated to the question at hand, I still felt compelled to update this to leverage modern conventions and API, notably:
Use NSURLSession rather than dispatching -[NSData contentsOfURL:] to a background queue;
Use dequeueReusableCellWithIdentifier:forIndexPath: rather than dequeueReusableCellWithIdentifier: (but make sure to use a cell prototype or register a class or NIB for that identifier); and
I used a class name that conforms to Cocoa naming conventions (i.e., starts with an uppercase letter).
Even with these corrections, there are issues:
The above code is not caching the downloaded images. That means that if you scroll an image off screen and back on screen, the app may try to retrieve the image again. Perhaps you'll be lucky enough that your server response headers will permit the fairly transparent caching offered by NSURLSession and NSURLCache, but if not, you'll be making unnecessary server requests and offering a much slower UX.
We're not canceling requests for cells that scroll off screen. Thus, if you rapidly scroll to the 100th row, the image for that row could be backlogged behind requests for the previous 99 rows that aren't even visible anymore. You always want to make sure you prioritize requests for visible cells for the best UX.
The simplest fix that addresses these issues is to use a UIImageView category, such as is provided with SDWebImage or AFNetworking. If you want, you can write your own code to deal with the above issues, but it's a lot of work, and the above UIImageView categories have already done this for you.
You have set the upstream of that branch (see --set-upstream-to):
git branch -f --track my_local_branch origin/my_remote_branch
# OR (if my_local_branch is currently checked out):
git branch --set-upstream-to=origin/my_remote_branch my_local_branch
(git branch -f --track won't work if the branch is checked out: use the second command, git branch --set-upstream-to, instead, or you would get "fatal: Cannot force update the current branch.")
That means your branch is already configured with:
branch.my_local_branch.remote origin
branch.my_local_branch.merge my_remote_branch
Git already has all the necessary information.
In that case:
# if you weren't already on my_local_branch branch:
git checkout my_local_branch
# then:
git pull
is enough.
If you hadn't established that upstream branch relationship when it came time to push your 'my_local_branch', then a simple git push -u origin my_local_branch:my_remote_branch would have been enough to push and set the upstream branch.
After that, for the subsequent pulls/pushes, git pull or git push would, again, have been enough.
Try using a custom answer.
For example:
import static java.lang.reflect.Modifier.isAbstract;

import org.mockito.Mockito;
import org.mockito.invocation.InvocationOnMock;
import org.mockito.stubbing.Answer;
public class CustomAnswer implements Answer<Object> {
public Object answer(InvocationOnMock invocation) throws Throwable {
Answer<Object> answer = null;
if (isAbstract(invocation.getMethod().getModifiers())) {
answer = Mockito.RETURNS_DEFAULTS;
} else {
answer = Mockito.CALLS_REAL_METHODS;
}
return answer.answer(invocation);
}
}
It will return the mock for abstract methods and will call the real method for concrete methods.
So I think I found a method that works more cleanly than the ones above because it doesn't require any global variables. I got it from here: http://nshipster.com/swift-objc-runtime/
The gist is that you use a struct like so:
extension UIViewController {
private struct AssociatedKeys {
static var DescriptiveName = "nsh_DescriptiveName"
}
var descriptiveName: String? {
get {
return objc_getAssociatedObject(self, &AssociatedKeys.DescriptiveName) as? String
}
set {
if let newValue = newValue {
objc_setAssociatedObject(
self,
&AssociatedKeys.DescriptiveName,
newValue as NSString?,
UInt(OBJC_ASSOCIATION_RETAIN_NONATOMIC)
)
}
}
}
}
UPDATE for Swift 2
private struct AssociatedKeys {
static var displayed = "displayed"
}
//this lets us check to see if the item is supposed to be displayed or not
var displayed : Bool {
get {
guard let number = objc_getAssociatedObject(self, &AssociatedKeys.displayed) as? NSNumber else {
return true
}
return number.boolValue
}
set(value) {
objc_setAssociatedObject(self,&AssociatedKeys.displayed,NSNumber(bool: value),objc_AssociationPolicy.OBJC_ASSOCIATION_RETAIN_NONATOMIC)
}
}
Specify all fields but your ID field.
INSERT INTO MyTable (FIELD2, FIELD3, ..., FIELD529, PreviousId)
SELECT FIELD2, NULL, ..., FIELD529, FIELD1
FROM MyTable
WHERE FIELD1 = @Id;
Other answers added great stuff. I will add just my 2 cents.
According to Wikipedia we know for sure: a file descriptor is a non-negative integer. The most important thing I think is missing is:
File descriptors are bound to a process ID.
We know the most famous file descriptors are 0, 1 and 2: 0 corresponds to STDIN, 1 to STDOUT, and 2 to STDERR.
Take shell processes as an example: how does this apply to them?
Check out this code:
#>sleep 1000 &
[12] 14726
We created a process with the id 14726 (PID).
Using lsof -p 14726 we can get things like this:
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
sleep 14726 root cwd DIR 8,1 4096 1201140 /home/x
sleep 14726 root rtd DIR 8,1 4096 2 /
sleep 14726 root txt REG 8,1 35000 786587 /bin/sleep
sleep 14726 root mem REG 8,1 11864720 1186503 /usr/lib/locale/locale-archive
sleep 14726 root mem REG 8,1 2030544 137184 /lib/x86_64-linux-gnu/libc-2.27.so
sleep 14726 root mem REG 8,1 170960 137156 /lib/x86_64-linux-gnu/ld-2.27.so
sleep 14726 root 0u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 1u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 2u CHR 136,6 0t0 9 /dev/pts/6
The 4th column, FD, and the very next column, TYPE, correspond to the file descriptor and the file descriptor type.
Some of the values for the FD can be:
cwd – Current Working Directory
txt – Text file
mem – Memory mapped file
mmap – Memory mapped device
But the real file descriptor is under:
NUMBER – represents the actual file descriptor.
The character after the number, e.g. the "u" in "1u", represents the mode in which the file is opened: r for read, w for write, u for read and write.
TYPE specifies the type of the file. Some of the values of TYPEs are:
REG – Regular File
DIR – Directory
FIFO – First In First Out
But all of our process's file descriptors are CHR – character special files (or character device files).
Now, we can identify the file descriptors for STDIN, STDOUT and STDERR easily with lsof -p PID, or we can see the same if we ls /proc/PID/fd.
Note also that file descriptor table that kernel keeps track of is not the same as files table or inodes table. These are separate, as some other answers explained.
You may ask yourself where these file descriptors physically are and what is stored in /dev/pts/6, for instance:
sleep 14726 root 0u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 1u CHR 136,6 0t0 9 /dev/pts/6
sleep 14726 root 2u CHR 136,6 0t0 9 /dev/pts/6
Well, /dev/pts/6 lives purely in memory. These are not regular files, but so-called character device files. You can check this with ls -l /dev/pts/6: the permissions will start with c, in my case crw--w----.
Just to recall, most Linux-like OSes define seven types of files: regular files, directories, character device files, block device files, FIFOs (named pipes), sockets, and symbolic links.
// for choosing multiple images declare variables
int PICK_IMAGE_MULTIPLE = 2;
String realImagePath;
// After requesting the FILE READ permission (e.g. on a button click):
Intent intent = new Intent();
intent.setType("image/*");
intent.putExtra(Intent.EXTRA_ALLOW_MULTIPLE, true);
intent.setAction(Intent.ACTION_GET_CONTENT);
startActivityForResult(Intent.createChooser(intent,"Select Images"), PICK_IMAGE_MULTIPLE);
public void onActivityResult(int requestCode, int resultCode, Intent data) {
super.onActivityResult(requestCode, resultCode, data);// FOR CHOOSING MULTIPLE IMAGES
try {
// When an Image is picked
if (requestCode == PICK_IMAGE_MULTIPLE && resultCode == RESULT_OK
&& null != data) {
if (data.getClipData() != null) {
int count = data.getClipData().getItemCount(); //evaluate the count before the for loop --- otherwise, the count is evaluated every loop.
for (int i = 0; i < count; i++) {
Uri imageUri = data.getClipData().getItemAt(i).getUri();
realImagePath = getPath(this, imageUri);
//do something with the image (save it to some directory or whatever you need to do with it here)
Log.e("ImagePath", "onActivityResult: " + realImagePath);
}
} else if (data.getData() != null) {
Uri imageUri = data.getData();
realImagePath = getPath(this, imageUri);
//do something with the image (save it to some directory or whatever you need to do with it here)
Log.e("ImagePath", "onActivityResult: " + realImagePath);
}
}
} catch (Exception e) {
Toast.makeText(this, "Something went wrong", Toast.LENGTH_LONG)
.show();
}
}
public static String getPath(final Context context, final Uri uri) {
// DocumentProvider
if (DocumentsContract.isDocumentUri(context, uri)) {
// ExternalStorageProvider
if (isExternalStorageDocument(uri)) {
final String docId = DocumentsContract.getDocumentId(uri);
final String[] split = docId.split(":");
final String type = split[0];
if ("primary".equalsIgnoreCase(type)) {
return Environment.getExternalStorageDirectory() + "/" + split[1];
}
// TODO handle non-primary volumes
}
// DownloadsProvider
else if (isDownloadsDocument(uri)) {
final String id = DocumentsContract.getDocumentId(uri);
final Uri contentUri = ContentUris.withAppendedId(
Uri.parse("content://downloads/public_downloads"), Long.parseLong(id));
return getDataColumn(context, contentUri, null, null);
}
// MediaProvider
else if (isMediaDocument(uri)) {
final String docId = DocumentsContract.getDocumentId(uri);
final String[] split = docId.split(":");
final String type = split[0];
Uri contentUri = null;
if ("image".equals(type)) {
contentUri = MediaStore.Images.Media.EXTERNAL_CONTENT_URI;
} else if ("video".equals(type)) {
contentUri = MediaStore.Video.Media.EXTERNAL_CONTENT_URI;
} else if ("audio".equals(type)) {
contentUri = MediaStore.Audio.Media.EXTERNAL_CONTENT_URI;
}
final String selection = "_id=?";
final String[] selectionArgs = new String[]{
split[1]
};
return getDataColumn(context, contentUri, selection, selectionArgs);
}
}
// MediaStore (and general)
else if ("content".equalsIgnoreCase(uri.getScheme())) {
return getDataColumn(context, uri, null, null);
}
// File
else if ("file".equalsIgnoreCase(uri.getScheme())) {
return uri.getPath();
}
return null;
}
/**
* Get the value of the data column for this Uri. This is useful for
* MediaStore Uris, and other file-based ContentProviders.
*
* @param context The context.
* @param uri The Uri to query.
* @param selection (Optional) Filter used in the query.
* @param selectionArgs (Optional) Selection arguments used in the query.
* @return The value of the _data column, which is typically a file path.
*/
public static String getDataColumn(Context context, Uri uri, String selection,
String[] selectionArgs) {
Cursor cursor = null;
final String column = "_data";
final String[] projection = {
column
};
try {
cursor = context.getContentResolver().query(uri, projection, selection, selectionArgs,
null);
if (cursor != null && cursor.moveToFirst()) {
final int column_index = cursor.getColumnIndexOrThrow(column);
return cursor.getString(column_index);
}
} finally {
if (cursor != null)
cursor.close();
}
return null;
}
/**
* @param uri The Uri to check.
* @return Whether the Uri authority is ExternalStorageProvider.
*/
public static boolean isExternalStorageDocument(Uri uri) {
return "com.android.externalstorage.documents".equals(uri.getAuthority());
}
/**
* @param uri The Uri to check.
* @return Whether the Uri authority is DownloadsProvider.
*/
public static boolean isDownloadsDocument(Uri uri) {
return "com.android.providers.downloads.documents".equals(uri.getAuthority());
}
/**
* @param uri The Uri to check.
* @return Whether the Uri authority is MediaProvider.
*/
public static boolean isMediaDocument(Uri uri) {
return "com.android.providers.media.documents".equals(uri.getAuthority());
}
This worked perfectly for me. Credits: "Get real path from URI, Android KitKat new storage access framework".
All you need to do is go to Control Panel > Computer Management > Services and manually start the SQL Express or SQL Server service. It worked for me.
Good luck.
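If you prefer the command line, the same thing can be done from an elevated prompt (the service names below are the defaults and vary by instance):
net start MSSQLSERVER
net start "SQL Server (SQLEXPRESS)"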
Pull from the repository to a new directory, then rename the old one to old_crufty and the new one to my_real_webserver_directory, and you're good to go.
If your intention is that every single file is in SVN, then this is a good way to test your theory. If your intention is that some files are not in SVN, then use Brian's copy/paste technique.
Try this. The scope of the local variables is defined by the "template" directive.
<table>
<template ngFor let-group="$implicit" [ngForOf]="groups">
<tr>
<td>
<h2>{{group.name}}</h2>
</td>
</tr>
<tr *ngFor="let item of group.items">
<td>{{item}}</td>
</tr>
</template>
</table>
<div id="textFileID"></div>
<script type="text/javascript">
// the div must exist before this runs (or wrap the call in $(function(){ ... }))
$("#textFileID").html("Loading...").load("URL TEXT");
</script>
You've mixed tabs and spaces. __init__ is actually defined nested inside another method, so your class doesn't have its own __init__ method, and it inherits object.__init__ instead. Open your code in Notepad instead of whatever editor you're using, and you'll see your code as Python's tab-handling rules see it.
This is why you should never mix tabs and spaces. Stick to one or the other. Spaces are recommended.
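If you're on Python 2, the interpreter can flag the inconsistency for you (Python 3 already rejects ambiguous mixing with a TabError by default):
python -tt yourscript.py   # -tt turns inconsistent tab/space usage into an error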
This will do it recursively for you:
find /path/to/base/dir/* -type d -ctime +10 -exec rm -rf {} \;
Explanation:
find: the unix command for finding files / directories / links etc.
/path/to/base/dir: the directory to start your search in.
-type d: only find directories.
-ctime +10: only consider the ones with a change time older than 10 days.
-exec ... \; : for each such result found, run the command given in place of ...
rm -rf {}: recursively force-remove the directory; the {} part is where the find result gets substituted in from the previous part.
Alternatively, use:
find /path/to/base/dir/* -type d -ctime +10 | xargs rm -rf
Which is a bit more efficient, because it amounts to:
rm -rf dir1 dir2 dir3 ...
as opposed to:
rm -rf dir1; rm -rf dir2; rm -rf dir3; ...
as in the -exec method.
With modern versions of find, you can replace the ; with + and it will do the equivalent of the xargs call for you, passing as many files as will fit on each exec system call:
find . -type d -ctime +10 -exec rm -rf {} +
You can do it in less than a minute with Google Drive (and free, no hassles)
• Bulk upload all your images on imgur.com
• Copy the links of all the images together, appended with .jpg. Only imgur lets you copy all the image links together; do that using the image tab at the top right.
• Use http://TextMechanic.co to prepend and append each line with this:
Prefix : =image("
AND
Suffix : ", 1)
So that it looks like this =image("URL", 1)
• Copy All
• Paste it in Google Spreadsheet
• Voila!
References :
http://www.labnol.org/internet/images-in-google-spreadsheet/18167/
https://support.google.com/drive/bin/answer.py?hl=en&answer=87037&from=1068225&rd=1
On a Spanish keyboard, without changing anything, I can make a comment with the keys:
cmd + -
OR
cmd + alt + -
This works because / on an English keyboard is located at the same place as - on a Spanish keyboard.
You can also do this with linq if you'd like
var names = new List<string>() { "John", "Anna", "Monica" };
var joinedNames = names.Aggregate((a, b) => a + ", " + b);
That said, I prefer the non-LINQ syntax in Quartermeister's answer, and I think Aggregate might perform slower (probably more string concatenation operations).
In my opinion the best solution is in this link: http://www.myonlinetraininghub.com/excel-factor-12-secret-evaluate-function
Here is a summary:
1) In cell A1 enter 1
2) In cell A2 enter 2
3) In cell A3 enter +
4) Create a named range with "=Evaluate(A1 & A3 & A2)" in the "refers to" field; let's call this named range "testEval"
5) In cell A4 enter =testEval
Cell A4 should have the value 3 in it.
Notes: a) Requires no programming/VBA. b) I did this in Excel 2013 and it works.
java.util.Date temp = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSSSS").parse("2012-07-10 14:58:00.000000");
The mm is minutes; you want MM.
CODE
public class Test {
public static void main(String[] args) throws ParseException {
java.util.Date temp = new SimpleDateFormat("yyyy-MM-dd HH:mm:ss.SSSSSS")
.parse("2012-07-10 14:58:00.000000");
System.out.println(temp);
}
}
Prints:
Tue Jul 10 14:58:00 EDT 2012
>>> import pandas as pd
>>> df = pd.DataFrame({'Position':[1,2,3,4,5], 'Letter':['a', 'b', 'c', 'd', 'e']})
>>> dict(sorted(df.values.tolist())) # Sort of sorted...
{'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}
>>> from collections import OrderedDict
>>> OrderedDict(df.values.tolist())
OrderedDict([('a', 1), ('b', 2), ('c', 3), ('d', 4), ('e', 5)])
Explaining solution: dict(sorted(df.values.tolist()))
Given:
df = pd.DataFrame({'Position':[1,2,3,4,5], 'Letter':['a', 'b', 'c', 'd', 'e']})
[out]:
Letter Position
0 a 1
1 b 2
2 c 3
3 d 4
4 e 5
Try:
# Get the values out to a 2-D numpy array,
df.values
[out]:
array([['a', 1],
['b', 2],
['c', 3],
['d', 4],
['e', 5]], dtype=object)
Then optionally:
# Dump it into a list so that you can sort it using `sorted()`
sorted(df.values.tolist()) # Sort by key
Or:
# Sort by value:
from operator import itemgetter
sorted(df.values.tolist(), key=itemgetter(1))
[out]:
[['a', 1], ['b', 2], ['c', 3], ['d', 4], ['e', 5]]
Lastly, cast the list of 2-element lists into a dict.
dict(sorted(df.values.tolist()))
[out]:
{'a': 1, 'b': 2, 'c': 3, 'd': 4, 'e': 5}
Answering @sbradbio's comment:
If there are multiple values for a specific key and you would like to keep all of them, it's not the most efficient but the most intuitive way is:
from collections import defaultdict
import pandas as pd
multivalue_dict = defaultdict(list)
df = pd.DataFrame({'Position':[1,2,4,4,4], 'Letter':['a', 'b', 'd', 'e', 'f']})
for idx,row in df.iterrows():
multivalue_dict[row['Position']].append(row['Letter'])
[out]:
>>> print(multivalue_dict)
defaultdict(list, {1: ['a'], 2: ['b'], 4: ['d', 'e', 'f']})
import java.util.Map;
public class KeyValue<K, V> implements Map.Entry<K, V>
{
private K key;
private V value;
public KeyValue(K key, V value)
{
this.key = key;
this.value = value;
}
public K getKey()
{
return this.key;
}
public V getValue()
{
return this.value;
}
public K setKey(K key)
{
return this.key = key;
}
public V setValue(V value)
{
    V oldValue = this.value;
    this.value = value;
    return oldValue; // Map.Entry's contract: return the previously stored value
}
}
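A quick usage sketch (the key and value are arbitrary):
KeyValue<String, Integer> kv = new KeyValue<>("answer", 42);
System.out.println(kv.getKey() + " = " + kv.getValue()); // answer = 42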
Margin = new Thickness(0, 0, 0, 0);
There is no OOTB feature at this moment which allows this. One way to achieve it could be to write a custom InputFormat and/or SerDe that will do this for you. You might find this JIRA useful: https://issues.apache.org/jira/browse/HIVE-3751 (not directly related to your problem, though).
You can use the substring method:
s = s.substring(0, s.length - 1) //removes last character
Another alternative is the slice method.
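For example, a one-line sketch:
s = s.slice(0, -1); // also removes the last character; negative indices count from the end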
Try this:
SELECT Count(*)
FROM <DATABASE_NAME>.INFORMATION_SCHEMA.TABLES
WHERE TABLE_TYPE = 'BASE TABLE'
You may want to read this: "“Out Of Memory” Does Not Refer to Physical Memory" by Eric Lippert.
In short, and very simplified, "Out of memory" does not really mean that the amount of available memory is too small. The most common reason is that within the current address space, there is no contiguous portion of memory that is large enough to serve the wanted allocation. If you have 100 blocks, each 4 MB large, that is not going to help you when you need one 5 MB block.
Key Points: